
Meta AI book scraping: 'We need to speak up', say authors
Landing a publishing deal was a dream come true for Jack Strange.

"It was incredible. I'd had so many rejections along the way," he says. "So when someone said yes, I cried because it's everything I ever wanted."

Before Jack published debut novel Look Up, Handsome, he'd written other, self-published titles.

But he felt an entirely different emotion when he found out that those works had appeared on LibGen - a so-called "shadow library" containing millions of books and academic papers taken without permission.

An investigation by The Atlantic magazine revealed Meta may have accessed millions of pirated books and research papers through LibGen - Library Genesis - to train its generative AI (Gen-AI) system, Llama.

Now author groups across the UK and around the world are organising campaigns to encourage governments to intervene.

Meta, which owns Facebook, Instagram and WhatsApp, is currently defending a court case brought by multiple authors over the use of their work.
'More difficult with AI coming in'
Llama is a large language model, or LLM, similar to OpenAI's ChatGPT and Google's Gemini.

The systems are fed huge amounts of data and trained to spot patterns within it. They use this data to create passages of text by predicting the next word in a sequence.

Despite the systems being labelled intelligent, critics argue LLMs do not "think", have no understanding of what they produce and can confidently present errors as fact.

Tech companies argue that they need more data to make the systems more reliable, but authors, artists and other creatives say they should pay for the privilege.

A Meta spokesperson told BBC Newsbeat it had "developed transformational GenAI powering incredible innovation, productivity and creativity for individuals and companies".

They added that "fair use of copyrighted materials is vital to this", and that the company wants to develop AI that benefits everyone.

As well as concerns over copyright and accuracy, AI systems are also power-hungry, prompting environmental fears and worries they could threaten jobs.
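The "predicting the next word in a sequence" idea above can be made concrete with a deliberately tiny sketch. The toy counting model below is illustrative only - Llama and other real LLMs use neural networks trained on vast datasets, not lookup tables - but the basic objective of guessing a likely next word is the same:

```python
# Toy "next word" predictor: count which words follow which in a corpus,
# then predict the most frequent follower. Real LLMs learn these patterns
# with neural networks over billions of documents; the objective is similar.
from collections import Counter, defaultdict

def train_bigram_model(text):
    """For each word, count the words that follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1
    return following

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ate"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # -> "cat" (follows "the" most often)
print(predict_next(model, "on"))   # -> "the"
```

Scaled up from ten words to billions, this is the mechanism behind authors' complaints: a model trained on their books absorbs exactly these kinds of word-choice patterns - their voice and style.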
Facing down a trillion-dollar company
While Jack's debut novel wasn't part of the LibGen dataset, he did find some of his self-published books had been taken.

He says he wasn't surprised, because he'd seen so many fellow authors affected, but that it did spur him on to want to do something about it.

"There's always something you can do. You can't just say 'oh well'. You've got to speak up and fight back," he tells BBC Newsbeat.

Meta says open source AI like Llama will "increase human productivity, creativity, and quality of life". But Jack says it poses a real risk to creatives like him.

"It's annoying that the first thing AI comes for are creative jobs that bring you joy.

"We're so undervalued already, and we're even more undervalued now with AI coming in."

Jack says going up against a company like Meta, which is worth more than a trillion dollars, doesn't feel like a fight he can take on alone.

"How much control can you take back when your work has already been taken?

"How do we live with that and how do we get protected from that?"
He's one of a growing number of writers calling on the government to intervene, with a demonstration planned on Thursday near Meta's London office, as well as action online.

Abie Longstaff works at the Society of Authors, a union representing writers, illustrators and translators, and tells Newsbeat it has been raising concerns about the risks of AI for years.

"We all feel that level of helplessness," she says. "But we're all fighting so hard."

She says her work has also been stolen and used to train AI, something she believes has an impact on future publishing opportunities.

"Large language models work by prediction, they work by looking at patterns. They want our voice, they want our expression, they want our style.

"So you can as a normal person go onto one of these sites and say 'please can you write me a book in the style of Abie Longstaff' and they'll write it in my style, in my voice."

Because their work has been scraped, though, writers won't get any compensation or recognition when it's used this way.

"We want to see compensation, we want to see that it's more transparent," Abie says.

"The company has taken our books and used it to make money. It has money, but instead of paying us for our intellectual property, instead of licensing a word, it's taking it all for free."
The Society of Authors, along with other unions like the Writers' Guild, is encouraging writers to get in touch with their MPs to raise their concerns in government.

In December, the government launched a consultation in a bid to balance copyright holders' control over, and payment for, their work against AI companies' desire for "wide and lawful access to high-quality data".

One proposal was giving tech companies automatic access to works such as books, films and TV shows to train AI models unless creators opted out.

But Abie thinks that's the wrong way round.

"It's like saying you've got to put a note on your wallet saying no-one steal it," she says. "It should be the AI companies asking us if they can use our work."

Writing is something Jack had always dreamed of doing - and still does, despite the challenges he's currently facing.

"It's still my dream to be an author and hopefully write full time. It's incredibly difficult now, it's going to be more difficult with AI coming in."
Listen to Newsbeat live at 12:45 and 17:45 weekdays - or listen back here.
Related Articles


Daily Mirror
Jack Fincham and Chloe Brockett 'back together again after sixth split'
Seventh time's a charm for Jack and Chloe, who have reunited again after a tough couple of months for the pair, reigniting the spark by going on a series of sweet dates with each other.

Love Island star Jack Fincham and his on-off girlfriend Chloe Brockett are back together, after six break-ups. Still not ready to fully call it a day, the couple are trying again to make their relationship work.

Only in April it was reported that the constant bickering and heated rows had become too much for the pair, who ended their relationship for the sixth time. But seventh time's the charm for the tumultuous couple, who have come to the realisation that they have something "special".

Jack, who found fame on Love Island in 2018, was paired up on the show with Dani Dyer. But the pair didn't last, splitting up just a few months after leaving the villa. Dani, 28, later found love with West Ham footballer Jarrod Bowen, with the couple getting married last weekend.

Jack moved on with TOWIE star Chloe, 24, after they got together in 2019, and they've finally put their differences aside, after being spotted at Bethnal Green's Columbia Road Flower Market.

A source told the Sun: "Chloe and Jack have been on and off for months but have finally put their differences aside to give their relationship another chance. They really want to make it work so have been enjoying some lowkey dates as they gradually build the trust.

"They have realised they have something special so want to put their time and energy into making it work."

The Mirror has reached out to Jack and Chloe's reps for comment.

In March, Chloe opened up about one of their rows, admitting she threw a virtual reality headset at her boyfriend in a rage. She spoke about it on the Spread The Juice podcast, explaining she had bought him the headset as a Christmas gift.

She told host Jemma Lucy: "I bought Jack a Meta Quest for Christmas and I threw the Meta Quest at him. I just threw it at him."

She revealed her reason for the aggressive moment was simply because she was "angry". At the time, Chloe insisted the pair were still an item. She told her Instagram followers: "For anyone concerned, I am very much still in a relationship. The last few months been very hard for everyone involved."

She seemed to be referring to his court case, in which he avoided prison time after winning an appeal against his six-week prison sentence. He was initially charged and convicted of being in charge of a dangerously out-of-control dog. But after the appeal, he was given a further three-month suspended sentence. This was added to his 18-month suspended sentence for a drug-driving offence in March 2024.

They then split, with Jack admitting in May they were living apart. He told the Sun: "Chloe and I haven't been in a great place. I'm living at my mum's and she has moved out of the house we lived in and into her own flat.

"I don't know what will happen between us. I still love her. I doubt it's over for good. We've been through a hell of a lot in the past couple of years."


Daily Mail
Layoffs sweep America as AI leads job cut 'bloodbath'
Elon Musk and hundreds of other tech mavens wrote an open letter two years ago warning AI would 'automate away all the jobs' and upend society. And it seems as if we should have listened to them.

Layoffs are sweeping America, nixing thousands of roles at Microsoft, Walmart, and other titans, with the newly unemployed speaking of a 'bloodbath' on the scale of the pandemic. This time it's not blue-collar and factory workers facing the ax - it's college grads with white-collar roles in tech, finance, law, and consulting.

Entry-level jobs are vanishing the fastest, stoking fears of recession and a generation of disillusioned graduates left stranded with CVs no one wants. Graduates are now more likely to be unemployed than others, data has shown.

Chatbots have already taken over data entry and customer service posts. Next-generation 'agentic' AI can solve problems, adapt, and work independently. These 'smartbots' are already spotting market trends, running logistics operations, writing legal contracts, and diagnosing patients. The markets have seen the future: AI investment funds are growing by as much as 60 per cent a year.

'The AI layoffs have begun, and they're not stopping,' says tech entrepreneur Alex Finn. Luddites who don't embrace the tech 'will be completely irrelevant in the next five years,' he posted on X.

Procter & Gamble, which makes diapers, laundry detergent, and other household items, this week said it would cut 7,000 jobs, or about 15 per cent of non-manufacturing roles. Its two-year restructuring plan involves shedding managers whose roles can be automated away.

Microsoft last month announced a cull of 6,000 staff - about three per cent of its workforce - targeting managerial flab, after a smaller round of performance-related cuts in January. LA-based tech entrepreneur Jason Shafton said the software giant's layoffs spotlight a trend 'redefining' the job market. 'If AI saves each person 10 per cent of their time (and let's be real, it's probably more), what does that mean for a company of 200,000?' he wrote.

Retail titan Walmart, America's biggest private employer, is slashing 1,500 tech, sales, and advertising jobs in a streamlining effort. Citigroup, cybersecurity firm CrowdStrike, Disney, online education firm Chegg, Amazon, and Warner Bros. Discovery have culled dozens or even hundreds of their workers in recent weeks.

Musk himself led a federal sacking spree during his 130-day stint at the Department of Government Efficiency, which ended on May 30. Federal agencies lost some 135,000 workers to firings and voluntary resignations on his watch, and 150,000 more roles are set to be mothballed.

Employers had already announced 220,000 job cuts by the end of February, the highest layoff rate seen since 2009. In announcing cuts, executives often talk about restructuring and tough economic headwinds. Many are spooked by President Donald Trump's on-and-off tariffs, which sent stock markets into free-fall and prompted CEOs to second-guess their long-term plans. Others say something deeper is happening, as companies embrace the next generation of chatbots and AI.

Robots and machines have for decades usurped factory workers. AI chatbots have more recently replaced routine, repetitive, data entry, and customer service roles. A new and more sophisticated technology - called agentic AI - now operates more independently: perceiving the environment, setting goals, making plans, and executing them.
AI-powered software now writes reports, analyzes spreadsheets, creates legal contracts, designs logos, and even drafts press releases, all in seconds. Banks are axing graduate recruitment schemes. Law firms are replacing paralegals with AI-driven tools. Even tech startups, the birthplace of innovation, are swapping junior developers for code-writing bots.

Managers increasingly seek to become 'AI first' and test whether tasks can be done by AI before hiring a human. That's now company policy at Shopify and is how fintech firm Klarna shrank its headcount by 40 per cent, CEO Sebastian Siemiatkowski told CNBC last month. Experienced workers are encouraged to automate tasks and get more work done; recent graduates are struggling to get their foot in the door.

From a distance, the job market looks relatively buoyant, with unemployment holding steady at 4.2 per cent for the third consecutive month, the Labor Department reported on Friday. But it's unusually high - close to 6 per cent - among recent graduates. The Federal Reserve Bank of New York recently said job prospects for these workers had 'deteriorated noticeably'.

That spells trouble not just for young workers, but for the long-term health of businesses - and the economy. Economists warn of an AI-induced downturn, as millions lose jobs, spending plummets, and social unrest festers. It's been dubbed an industrial revolution for the modern era, but one that's measured in years, not decades.

Dario Amodei, CEO of Anthropic, one of the world's most powerful AI firms, says we're at the start of a storm. AI could wipe out half of all entry-level white-collar jobs - and spike unemployment to 10-20 per cent in the next one to five years, he told Axios. Lawmakers have their heads in the sand and must stop 'sugar-coating' the grim reality of the late 2020s, Amodei said. 'Most of them are unaware that this is about to happen,' he said. 'It sounds crazy, and people just don't believe it.'

Young people who've been culled are taking to social media to vent their anger as the door to a middle-class lifestyle closes on them. Patrick Lyons calls it 'jarring and unexpected' how he lost his Austin-based program managing job in an 'emotionless business decision' by Microsoft. 'There's nothing the 6,000 of us could have done to prevent this,' he posted.

A young woman coder, known by her TikTok handle dotisinfluencing, posts a daily video diary about the 'f***ing massacre' of layoffs at her tech company as 'AI is taking over'. Her job search is going badly. She claims one recruiter appeared more interested in taking her out for drinks than offering a paycheck. 'I feel like s***,' she added.

Ben Wolfson, a young Meta software engineer, says entry-level software jobs dried up in 2023. 'Big tech doesn't want you, bro,' he said.

Critics say universities are churning out graduates into a market that simply doesn't need them. A growing number of young professionals say they feel betrayed - promised opportunity, but handed a future of 'AI-enhanced' redundancy. Others are eyeing an opportunity for a payout to try something different.

Donald King posted a recording of the meeting in which he was unceremoniously laid off from his data science job at consulting firm PwC. 'RIP my AI factory job,' he said. 'I built the thing that destroyed me.' He now posts from Porto, in Portugal - a popular spot for digital nomads - where he's founded a marketing startup.
Industry insiders say it won't be long before another generation of AI arrives to automate new sectors. As AI improves, the difference between 'safe' and 'automatable' work gets blurrier by the day. Human workers are advised to stay one step ahead and build AI into their own jobs to increase productivity.

Optimists point to careers such as radiology - where humans initially looked set to be outmoded by machines that could speedily read medical scans and pinpoint tumors. But the layoffs didn't happen. The technology was adopted - and radiologists adapted, using AI to sharpen images, automate some tasks, and boost productivity. Some radiology units even expanded their increasingly efficient human workforce.

Others say AI is a scapegoat for 2025's job cuts - that executives are downsizing for economic reasons and blaming technology so as not to panic shareholders. But for those who have lost their jobs, the future looks bleak.

Finextra
The AI Risk Equation: Delay vs Safety – Calculating the True Cost: By Erica Andersen
In the race to adopt artificial intelligence, too many enterprises are slamming on the brakes while neglecting the accelerator. As the saying goes, "AI may not be coming for your job, but a company using AI is coming for your company." The pressure to integrate AI solutions is becoming intense, and organizations that have missed early adoption windows are increasingly turning to external vendors for quick fixes. The longer enterprises wait, the faster and riskier the transition becomes when they are finally forced to adopt AI: by delaying, they have to learn fast, with no experience under their belt. This article explores the significant risks of unchecked AI deployment and offers guidance for navigating the challenges.

When AI Tools Go Rogue

Remember the UK Post Office Horizon scandal? A conventional software system led to hundreds of innocent people being prosecuted, some imprisoned, and lives utterly destroyed. That was just normal software. The AI tools your organization might be preparing to unleash represent an entirely different beast.

AI is like an adolescent - moody, unpredictable, and occasionally dangerous. Consider Air Canada's chatbot debacle: it confidently provided customers with incorrect bereavement policy information, and the courts ruled that Air Canada had to honor what its digital representative had erroneously promised. While in this case one might argue the chatbot was more humane than the company's actual policies, the financial implications were significant.

The critical question is: will your AI tool be trusted to behave and do its job, or will it go on a rampage and wreck your business? Learning how to deploy AI with robust oversight is a critical skill organizations must master if they want successful deployments rather than a game of Russian roulette. Companies starting now are gaining a significant edge in learning how to control this critical technology.

The Zillow Cautionary Tale

Zillow's failed foray into real estate flipping highlights the dangers of AI relying solely on past data. The algorithm, confident in its predictions, failed to account for rapidly changing market conditions, such as a drop in demand or nearby property issues - it could take months for Zillow's algorithm to recognize the impact on valuations. Meanwhile, savvy sellers capitalized on this, unloading properties to Zillow before it detected the prices plummeting, ultimately costing the company 10% of its workforce.

The problem? Zillow's AI was backward-looking, trained on historical data, and unable to adapt to dynamic environments. The same issue plagues stock-picking algorithms and other systems that perform beautifully on historical data but collapse when faced with new market conditions. If your AI is making decisions based solely on past data without accounting for environmental changes, you're setting yourself up for a Zillow-style catastrophe.

To mitigate this risk, ensure your AI's training data represents current and anticipated future conditions, and weigh the risks carefully. This is particularly crucial for financial systems, where tail risks are more frequent than models predict. Medical applications, like analyzing skin conditions, are much less susceptible to changing environments, as long as the AI is trained on a representative sample of the population.
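One lightweight mitigation is to routinely compare the data a deployed model is seeing against the data it was trained on, and flag the model for retraining when the two diverge. The sketch below illustrates the idea under assumptions of our own - the function name, threshold, and synthetic "price" data are placeholders, not Zillow's actual pipeline:

```python
# Illustrative drift check: compare the distribution of a feature at
# training time against what the deployed model sees now, and flag the
# model for review if the two have clearly diverged.
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(train_values, live_values, p_threshold=0.01):
    """Two-sample Kolmogorov-Smirnov test: a small p-value means the live
    data no longer looks like the data the model was trained on."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < p_threshold

rng = np.random.default_rng(seed=42)
train_prices = rng.normal(loc=300_000, scale=50_000, size=5_000)  # training-era sale prices
live_prices = rng.normal(loc=255_000, scale=60_000, size=1_000)   # market has shifted down

if feature_has_drifted(train_prices, live_prices):
    print("Price distribution has drifted - retrain or pause automated offers.")
```

The Kolmogorov-Smirnov test is only one of many drift measures; the point is to make "the world has changed" an automated alarm rather than a months-late surprise.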
Startup Corner-Cutting: From Unicorns to Bankruptcy

Your vendor might be cutting corners. While they may not be another Theranos, the risk is real. Take the UK tech unicorn that recently collapsed into bankruptcy amid financial reporting discrepancies. It has since emerged that it was a fraud, and people using the service are left with orphaned applications. Startups face intense pressure to deliver results, which can lead to critical oversights, with inconvenient truths often getting swept under the rug.

One common pitfall is bias in training data. When your system makes judgments about people, inherent biases can lead to discriminatory outcomes - and can even perpetuate and amplify them. Even tech giants aren't immune. Amazon attempted to build an AI resume-screening tool to identify top talent by analyzing its current workforce's resumes. The problem? AWS, its massive cloud division, was predominantly male, so the AI learned to favor male candidates. Even after purging overtly gender-identifying information, the system still detected subtle language patterns more common in men's resumes and continued its bias.

If you're using AI to determine whether someone qualifies for financing, how can you be sure the system isn't perpetuating existing biases? My advice: before deploying AI that makes decisions about people, carefully evaluate the data and the potential for bias, and consider implementing bias detection and mitigation techniques. Better yet, start now with an internal trial to see the problems that bias in the data might cause. Those organizations getting hands-on experience right now will be well ahead of their peers who have not started.
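As a starting point for the bias checks suggested above, here is a minimal sketch of one standard technique: comparing a model's approval rates across groups and computing the disparate-impact ratio, where values below roughly 0.8 (the "four-fifths rule" used in US employment guidance) warrant investigation. The data and group labels are entirely made up:

```python
# Minimal pre-deployment bias check: compare approval rates across groups.
# The 0.8 cutoff follows the "four-fifths rule"; the decisions are synthetic.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group approval rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
           + [("group_b", True)] * 50 + [("group_b", False)] * 50)

rates = approval_rates(decisions)
print(rates)                                       # {'group_a': 0.8, 'group_b': 0.5}
print(f"ratio = {disparate_impact_ratio(rates):.2f}")  # 0.62 - below 0.8, so investigate
```

A failing ratio doesn't prove discrimination on its own, but it is exactly the kind of cheap, automatable red flag an internal trial can surface before a system goes live.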
The Hallucination Problem

Then there are "hallucinations" in generative AI - a polite term for making things up, which is exactly what's happening. Just ask Elon Musk, whose chatbot Grok fabricated a story about NBA star Klay Thompson throwing bricks through windows in Sacramento. Sacramento might be bland, but it did not drive Klay to throw bricks through his neighbors' windows. Such fabrications are potentially damaging to reputations, including your company's.

How can you prevent similar embarrassments? Keep humans in the decision loop - at minimum, you'll have someone to blame when things go wrong. It wasn't the AI you purchased from "Piranha AI backed by Shady VC" that approved those questionable loans; it was Johnny from accounting who signed off on them.

A practical approach is designing your AI to show its work. When the system generates outputs by writing code to extract database information, this transparency - or "explainable AI" - allows you to verify the results and the logic used to arrive at them. There are other techniques that can reduce or eliminate the effect of hallucinations, but you need some hands-on experience to understand when they occur, what they say, and what risk they expose your organization to.

The Economic and Societal Costs of AI Failures

The costs of AI security and compliance failures extend far beyond immediate losses:

Direct Financial Costs: AI security breaches can lead to significant financial losses through theft, ransom payments, and operational disruption. The average cost of a data breach reached $4.45 million in 2023, with AI-enhanced attacks potentially driving this figure higher.

Regulatory Penalties: Non-compliant AI systems increasingly face steep regulatory penalties. Under GDPR, companies can be fined up to 4% of annual global revenue.

Reputational Damage: When AI systems make discriminatory decisions or privacy violations occur, the reputational damage can far exceed direct financial losses and persist for years.

Market Confidence Erosion: Systematic AI failures across an industry can erode market confidence, potentially triggering investment pullbacks and valuation corrections.

Societal Trust Decline: Each high-profile AI failure diminishes public trust in technology and institutions, making future innovation adoption more difficult.

The Path Forward

As you enter this dangerous world, you face a difficult choice: do you delay implementing AI and then have to scramble to catch up, or do you take the more cautious route and start working on AI projects now? The reality is that your competitors are likely adopting AI, and you will have to as well in the not-so-distant future. Some late starters will implement laughably ridiculous systems that cripple their operations. Don't assume that purchasing from established vendors guarantees protection - many products assume you will manage the risks. Trying to run a major AI project with no experience is like trying to drive a car with no training: close calls are the best you can hope for.

The winners will be companies that carefully select the best AI systems while implementing robust safeguards. Consider the following steps:

Prioritize Human Oversight: Implement robust human review processes for AI outputs.

Focus on Data Quality: Ensure your training data is accurate, representative, and accounts for potential biases.

Demand Explainability: Choose AI systems that provide transparency into their decision-making processes.

Establish Ethical Guidelines: Develop clear ethical guidelines for AI development and deployment. Alternatively, an AI consultancy can provide guidance - but vet them carefully, or you might end up with another problem rather than a solution.

Apply Proper Security and Compliance Measures: This isn't just good ethics - it's good business.

In the race to AI adoption, remember: it's better to arrive safely than to crash spectacularly before reaching the finish line. Those who have already started their AI journey are learning valuable lessons about what works and what doesn't. The longer you wait, the riskier your position becomes - and all you can hope for is more empty chambers in your Russian roulette revolver.

Written by Oliver King-Smith, CEO of smartR AI.