Meta may have used books by Gerry Adams to train AI


Yahoo · 09-04-2025

Former Sinn Féin president Gerry Adams is among a number of authors whose books may have been accessed by technology company Meta to train its latest AI (artificial intelligence) model.
Mr Adams said the books had "been used without his permission", and the matter is now with his solicitor.
An investigation by The Atlantic magazine revealed Meta may have accessed millions of pirated books and research papers through LibGen - Library Genesis - to train its generative AI (Gen-AI) system, Llama.
A spokesperson for Meta said: "We respect third-party intellectual property rights and believe our use of information to train AI models is consistent with existing law."
The Atlantic magazine also published a database of books that had been pirated by LibGen, so many authors have been able to find out if their work appears on the site.
When BBC News NI searched the database a number of authors from Northern Ireland appeared on the list, including Jan Carson, Lynne Graham, Deric Henderson, and Booker prize winner Anna Burns.
Authors from around the world have been organising campaigns to encourage governments to intervene.
Meta, which owns Facebook, Instagram and WhatsApp, is currently defending a court case brought by multiple authors over the use of their work.
Michael Taylor, a historian from Ballymena, said it is "infuriating" that Meta may have used his work.
Two of his books, The Interest and Impossible Monsters, both appear on the LibGen database.
"Writers spend years on their books, and contrary to what anybody thinks, very few people make enough money out of writing to live by their pen," he said.
"Meta might be worth more than a trillion dollars, and it might be politically untouchable, but by violating the copyright of so many thousands of books, its actions amount to the single greatest and the most lucrative act of theft in history."
Prof Monica McWilliams is an academic and former politician who has written extensively about the Northern Ireland peace process and domestic violence.
More than 20 of her academic papers and books appear on the database, including those on intimate partner violence and domestic violence against women during conflict.
She said when it came to her attention, she found it "quite shocking".
"The first principle in the academic world is that you direct your reader to your source material, and that isn't happening here," she said.
"It begs the question of what copyright even means anymore."
Prof McWilliams donates the royalties from sales of her writing to domestic violence charities like Women's Aid.
"If royalties are not being paid for the work to be used, then ultimately it is the charities that will lose out."
Last week, authors gathered in London to protest against Meta's actions, and high-profile authors including Kate Mosse, Richard Osman and Val McDermid signed an open letter calling on the Culture Secretary, Lisa Nandy, to bring Meta management to parliament.
Posting on X, Richard Osman, who wrote the popular Thursday Murder Club series, said: "Copyright law is not complicated at all. If you want to use an author's work you need to ask for permission.
"If you use it without permission you're breaking the law. It's so simple.
"It'll be incredibly difficult for us, and for other affected industries, to take on Meta, but we'll have a good go!"
Llama is a large language model, or LLM, similar to OpenAI's ChatGPT and Google's Gemini.
The systems are fed huge amounts of data and trained to spot patterns within it. They use this data to create passages of text by predicting the next word in a sequence.
Despite the systems being labelled intelligent, critics argue LLMs do not "think", have no understanding of what they produce and can confidently present errors as fact.
Tech companies argue that they need more data to make the systems more reliable, but authors, artists and other creatives say the companies should pay for the privilege.
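The "predicting the next word" idea described above can be illustrated with a toy example. This is not how Llama or any production LLM actually works — they use neural networks trained on billions of documents — it is only a minimal, frequency-based sketch of the same principle: learn which words tend to follow which from training text, then predict the most likely continuation.

```python
from collections import Counter, defaultdict

# Toy sketch only: count which word follows which in the training text.
# Real LLMs learn far richer patterns with neural networks, but the core
# task is the same: predict the next word in a sequence.
def train_bigrams(text):
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1
    return counts

def predict_next(counts, word):
    followers = counts.get(word.lower())
    if not followers:
        return None  # never saw this word in training
    return followers.most_common(1)[0][0]  # most frequent follower

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

A model this crude also shows why critics say LLMs "do not think": it has no understanding of cats or mats, only statistics about which words co-occur.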


Related Articles

How to tell if a login alert is real or a scam

Fox News

30 minutes ago

Online scams thrive on the urgency and fear of their victims. If you've ever been a victim of a scam, you'd know that bad actors often try to rush you into taking action by creating a sense of fear. A scammer may call you impersonating a government agency and claim your Social Security number has been linked to drug trafficking. A phishing email might ask you to update your tax details or claim you've won a lottery or a free product, all to get you to click a malicious link.

A more effective tactic scammers use is sending fake login alerts. These are warnings that someone has logged into your account, prompting you to take immediate action. This method works well because legitimate services like Google, Apple, Netflix and Facebook also send these types of notifications when someone, including you, logs in from a new device. It can be tricky to tell the difference.

As Robert from Danville asks, "I constantly get in my spam junk folder emails saying 'someone has logged into your account.' Is this spam? legitimate? concerning? How do I know? How to avoid wasting time checking? How do I check?"

Thanks for writing to us, Robert. I completely understand how tricky it can be to figure out whether these messages are legitimate or just another scam attempt. Let's break down what these urgent warnings usually look like and go over a few ways you can stay safe.

Scammers often pose as login alerts from Google, Apple, Meta or even your bank, complete with official-looking logos, because fear is effective. But not every alert is a scam. In many cases, these notifications are legitimate and can help you detect unauthorized access to your accounts.

Let's focus on the scam side first. Login alert scams have been around for a while. Early reports date back to 2021, and the trend has persisted since then. In 2022, reports surfaced that scammers were impersonating Meta and sending phishing emails to users. One such email used a clean layout with minimal text.
It avoided the usual scare tactics and stuck to a simple message. But that is not always the case. A common red flag in phishing attempts is the tendency to overload the email with unnecessary details. These messages often include cluttered formatting, excessive explanations and an increasing number of typos or design errors. One phishing email simply gets to the point:

"Someone tried to Iog into Your Account, User lD. A user just logged into your Facebook account from a new device Samsung S21. We are sending you this email to verify it's really you. Thanks, The Facebook Team"

What's concerning now is that poor grammar is no longer a reliable sign of a scam. Thanks to AI, even those with limited English skills can write emails that sound polished and professional. As a result, many phishing messages today read just like legitimate emails from trusted companies.

Receiving a phishing email is not the real issue. The real problem starts when you click on it. Most of these emails contain links that lead to fake login pages, designed to look exactly like platforms such as Facebook, Google or your bank. If you enter your credentials there, they go directly to the scammer. In some cases, simply clicking the link can trigger a malware download, especially if your browser is outdated or your device lacks proper security. Once inside, attackers can steal personal information, monitor your activity or take control of your accounts.

Real login notifications do exist; they're just much less scary. A genuine alert from Google, Apple or Microsoft will come from an official address (for example, no-reply@ or security@) and use consistent branding. The tone is factual and helpful. For instance, a legit Google security alert might say, "We detected a new sign-in to your Google Account on a Pixel 6 Pro device. If this was you, you don't need to do anything. If not, we'll help you secure your account."
It may include a "Check activity" button, but that link always redirects to an official address, and it won't prompt you to reenter your password via the email link. Similarly, Apple notes it will never ask for passwords or verification codes via email.

1. Don't click any links or attachments, and use strong antivirus software: Instead, manually log in to the real site (or open the official app) by typing the URL or using a bookmarked link. This guarantees you're not walking into a scammer's trap. The FTC recommends this: if you have an account with that company, contact them via the website or phone number you know is real, not the info in the email. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

2. Remove your data from the internet: Scammers are able to send you targeted messages because your data, like your email address or phone number, is already out there. This often happens due to past data breaches and shady data brokers. A data removal service can help clean up your digital trail by removing your information from public databases and people-search sites. It's not a quick fix, but over time it reduces how easily scammers can find and target you. While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren't cheap, but neither is your privacy: they do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites.
By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

3. Check your account activity: Go to your account's security or sign-in page. Services like Gmail, iCloud or your bank let you review recent logins and devices. If you see nothing unusual, you're safe. If you do find a strange login, follow the site's process (usually changing your password and logging out all devices). Even if you don't find anything odd, change your password as a precaution. Do it through the official site or app, not the email. Consider using a password manager to generate and store complex passwords.

4. Enable two-factor authentication (2FA): This is your best backup. With 2FA enabled, even if someone has your password, they can't gain access without your phone or another second factor. Both Google and Apple make 2FA easy and say it "makes it harder for scammers" to hijack your account.

5. Report suspicious emails: If you receive a suspicious email claiming to be from a specific organization, report it to that organization's official support or security team so they can take appropriate action.

You shouldn't have to vet every sketchy email. In fact, your email's spam filters catch most phishing attempts for you. Keep them enabled, and make sure your software is up to date so that malicious sites and attachments are blocked. Still, the most powerful filter is your own awareness. You're definitely not alone in this. People receive these spammy login scares every day. By keeping a cool head and following the steps above, you're already ahead of the game.

Have you ever encountered a suspicious email or phishing attempt? How did you handle it, and what did you learn from the experience?
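One part of the checks above — comparing the sender's address against domains you actually have accounts with — can be sketched in a few lines. This is only an illustration: the trusted-domain list below is a made-up example, not an authoritative list, and a spoofed "From" header can defeat this check, so treat it as one signal among many, never proof.

```python
from email.utils import parseaddr

# Hypothetical allowlist for illustration; substitute the services you use.
TRUSTED_DOMAINS = {"google.com", "accounts.google.com", "apple.com", "facebookmail.com"}

def sender_domain(from_header):
    # parseaddr splits 'Display Name <addr@host>' into (name, address).
    _, address = parseaddr(from_header)
    return address.rsplit("@", 1)[-1].lower() if "@" in address else ""

def looks_legitimate(from_header):
    domain = sender_domain(from_header)
    # Accept an exact match or a subdomain of a trusted domain.
    # A forged "From" header can still pass, so this is a signal, not proof.
    return any(domain == d or domain.endswith("." + d) for d in TRUSTED_DOMAINS)

print(looks_legitimate("Google <no-reply@accounts.google.com>"))        # True
print(looks_legitimate("Facebook Team <security@faceb00k-alerts.net>")) # False
```

Note how the second example uses "faceb00k" with zeros, the same kind of character swap as the "Iog"/"lD" tricks in the phishing email quoted earlier.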

Reddit Lawsuit Against Anthropic AI Has Stakes for Sports

Yahoo

an hour ago

In a new lawsuit, Reddit accuses AI company Anthropic of illegally scraping its users' data, including posts authored by sports fans who use the popular online discussion platform.

Reddit's complaint, drafted by John B. Quinn and other attorneys from Quinn Emanuel Urquhart & Sullivan, was filed on Wednesday in a California court. It contends Anthropic breached the Reddit user agreement by scraping Reddit content through its web crawler, ClaudeBot. The web crawler provides training data for Anthropic's AI tool, Claude, which relies on large language models (LLMs) that distill data and language.

Other claims in the complaint include tortious interference and unjust enrichment. Scraping Reddit content is portrayed as undermining Reddit's obligations to its more than 100 million daily active unique users, including to protect their privacy. Reddit also contends Anthropic subverts its assurances to users that they control their expressions, including when deleting posts from public view.

Scraping is key to AI. Automated technology makes requests to a website, then copies the results and tries to make sense of them. Anthropic, Reddit claims, finds Reddit data 'to be of the highest quality and well-suited for fine-tuning AI models' and useful for training AI. Anthropic allegedly violates users' privacy, since those users 'have no way of knowing' their data has been taken.

Reddit, valued at $6.4 billion in its initial public offering last year, has hundreds of thousands of 'subreddits,' or online communities that cover numerous shared interests. Many subreddits are sports related, including r/sports, which has 22 million fans, r/nba (17 million) and the college football-centered r/CFB (4.4 million).
Some pro franchises, including the Miami Dolphins (r/miamidolphins) and Dallas Cowboys (r/cowboys), have official subreddits.

Reddit contends its unique features elevate its content and thus make the content more attractive to scraping endeavors. Reddit users submit posts, which can include original commentary, links, polls and videos, and they upvote or downvote content. This voting influences whether a post appears on the subreddit's front page or is more obscurely placed. Subreddit communities also self-police, with prohibitions on personal attacks, harassment, racism and spam. These practices can generate thoughtful and detailed commentary.

Reddit estimates that ClaudeBot's scraping of Reddit has 'catapulted Anthropic into its valuation of tens of billions of dollars.' Meanwhile, Reddit says the company and its users lose out, because they 'realize no benefits from the technology that they helped create.'

Anthropic allegedly trained ClaudeBot to extract data from Reddit starting in December 2021. Anthropic CEO Dario Amodei is quoted in the complaint as praising Reddit content, especially content found in prominent subreddits. Although Anthropic indicated it had stopped scraping Reddit in July 2024, Reddit says audit logs show Anthropic 'continued to deploy its automated bots to access Reddit content' more than 100,000 times in subsequent months.

Reddit also unfavorably compares Anthropic to OpenAI and Google, which are 'giants in the AI space.' Reddit says OpenAI and Google 'entered into formal partnerships with Reddit' that permitted them to use Reddit content but only in ways that 'protect Reddit and its users' interests and privacy.' In contrast, Anthropic is depicted as engaging in unauthorized activities. In a statement shared with media, an Anthropic spokesperson said, 'we disagree with Reddit's claims, and we will defend ourselves vigorously.'
In the weeks ahead, attorneys for Anthropic will answer Reddit's complaint and argue the company has not broken any laws. Reddit v. Anthropic has implications beyond the posts of Reddit users. Scraping by web crawlers is a constant activity on the internet, including on message boards, blogs and other forums where sports fans and followers express viewpoints. The use of this content to train AI without users' knowledge or explicit consent is a legal topic sure to stir debate in the years ahead.
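For context on how crawlers like ClaudeBot interact with websites: the web's longstanding convention is robots.txt, a file that tells crawlers which paths they may fetch. Python's standard library can evaluate those rules. The rules and bot name below are hypothetical examples, not Reddit's actual robots.txt or Anthropic's crawler configuration.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules for illustration only.
rules = """\
User-agent: ExampleBot
Disallow: /r/private/

User-agent: *
Disallow: /login
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A well-behaved crawler consults these rules before each request.
print(parser.can_fetch("ExampleBot", "https://example.com/r/sports"))    # True
print(parser.can_fetch("ExampleBot", "https://example.com/r/private/"))  # False
```

Crucially, compliance with robots.txt is voluntary: nothing technically stops a crawler from ignoring it, which is why disputes like Reddit v. Anthropic turn on user agreements and the courts rather than on the protocol itself.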

Trump Taps Musk to 'Rebuild Government from the Ground Up,' Says One Tech Insider

Yahoo

2 hours ago

New briefing uncovers AI facility in Tennessee designed to power America's future, one run not by Washington but by Elon Musk.

BALTIMORE, June 08, 2025 (GLOBE NEWSWIRE) -- In a newly surfaced public briefing, bestselling author and tech analyst James Altucher reveals what he calls a 'massive transfer of control' inside the federal government, one that began on Day One of President Trump's return to the White House. According to Altucher, Trump isn't just slashing bureaucracy; he's outsourcing innovation to Elon Musk. The result is Project Colossus: a 200,000-chip AI supercomputer hidden inside a Memphis warehouse and operated entirely outside the traditional system.

A Silent Power Shift, Signed by Trump

'In one of his FIRST acts as President… Donald Trump overturned Executive Order #14110.' That reversal, Altucher says, stripped away Biden's AI restrictions, immediately giving private operators like Musk the runway to build freely. Trump then revealed Stargate, a $500 billion AI infrastructure initiative that, according to Altucher, is 'not about building government… it's about replacing it.'

Musk's AI Is Already Online

'Right here, inside this warehouse in Memphis, Tennessee… lies a massive supercomputer Musk calls 'Project Colossus.'' 'Making it the most advanced AI facility known to man.' Altucher claims that the system is already operational and is expected to expand dramatically before July 1, when a major upgrade could '10X its power overnight.'

Not Reform. Replacement.

According to Altucher, Musk and Trump aren't just reforming the system; they're replacing it with autonomous intelligence designed to streamline decisions, reduce costs and eliminate delay. 'AI 2.0… gives that knowledge to intelligent machines that I believe will solve our problems for us.' Altucher warns that what began as an infrastructure story is fast becoming one of control, and that the real question now is: who governs the machines?
About James Altucher

James Altucher is a computer scientist, entrepreneur, and bestselling author with four decades of experience in artificial intelligence. He studied at Cornell and Carnegie Mellon, helped develop IBM's Deep Blue, and has built AI-powered systems for use in finance and enterprise. His latest briefings focus on how AI is being deployed beyond the public's view, and who's behind it.

Media Contact:
Derek Warren
Public Relations Manager
Paradigm Press Group
Email: dwarren@
