logo
#

Latest news with #LLaMA3

Facebook stole my book to train its AI. The insult wasn't the money
Facebook stole my book to train its AI. The insult wasn't the money

Yahoo

time07-05-2025

  • Yahoo

Facebook stole my book to train its AI. The insult wasn't the money

It started with an email from my literary agent. 'You might've heard about LibGen and books being used to train AI,' she wrote. 'A search of the database shows that Poe for Your Problems is on the list.' I had seen the headline, back in March, in The Atlantic: 'Meta (META) pirated millions of books to train its AI,' it screamed. But to my shame, I hadn't actually read the piece. I had been too busy writing for Quartz (as now) and chasing my four-year-old around the house. Life goes on, even when someone steals your intellectual property. Yet there it was. The story detailed how Meta employees had downloaded tens of gigabytes of pirated books from LibGen, a well-known 'shadow library' that hosts roughly eight million titles. They used these files to help train LLaMA 3, the company's most advanced language model. Internal discussions, unsealed earlier this year as part of an ongoing lawsuit, show that Meta's engineers explicitly debated the legal risks and chose to proceed anyway, looking to save both time and money. The documents further suggest that top leadership signed off on the maneuver. 'Eventually, the team at Meta got permission from 'MZ' — an apparent reference to Meta CEO Mark Zuckerberg — to download and use the data set,' The Atlantic reported. In response, a group of authors including Sarah Silverman have filed a class action lawsuit, now winding its way through the legal system. Still, when my agent's email came, my first reaction wasn't outrage Instead, I was relieved. My little book had been deemed important enough to steal! Thank goodness. The greater shame would've been not being pirated, considering the scale of the theft. I turned to a friend who happened to be standing nearby and said, 'It's just nice to be included.' If that seems like a perverse reaction, well, it conveys how humbling it can be to achieve your lifelong author dreams. Many people dream of publishing a book someday — I had since I was eight years old — but becoming an author is a lesson in humility, at least in my experience. You show up at a bookstore to give a reading to five people, two of whom are your dad's cousins. You do podcast interviews in which the host greets you with, 'So I haven't had time to read your book… can you remind me what it's about and who you are?' Like I said: humbling. But the relief quickly curdled as I remembered what it had been like to sell the book in the first place. Nonfiction is sold on proposal — which means you don't write the book, you write a 40-page business plan for the book. I spent years on mine, rewriting it three separate times. I got upwards of two dozen rejections before it finally went to auction and found a home with a Hachette subsidiary in 2019. After that, it took another year to write the actual book. When it came out in 2021, I spent thousands of hours promoting it. Technically speaking, I was paid My advance was $20,000 — not nothing. But that was before taxes and my agent's 15% cut. I once calculated that I'd earned less than the federal minimum wage for writing the book. Since it 'earned out' in 2023 (meaning the publisher fully made back its costs), I've received another $10,000 in royalties. Those checks are always welcome, but they're also pre-tax couch change compared to the time, energy, and ambition that went into the work. It's hard to explain just how difficult it is to land a publishing deal, especially with one of the 'Big Four.' Unless you're an Obama or a Kardashian, it's nearly impossible. Publishing is a perpetually precarious industry, surviving year to year on breakout hits like Atomic Habits or It Ends With Us. Even the largest publishers operate on net margins of about 6%. That doesn't leave much room for risk. So they're understandably reluctant to bet on an unproven author with an odd idea. I got around the issue of not being famous myself by writing about someone who was: my literary hero, Edgar Allan Poe. But that didn't make things easy. Selling a self-help book based on the life of history's most miserable writer was, perhaps unsurprisingly, an uphill battle. And while my genre — irreverent self-help, sometimes called 'anti-self-help' — is relatively common now, it wasn't when I was knocking on doors, metaphorical hat in hand. Putting together a convincing business case was hard enough. Writing the actual book — 55,000 words of deeply researched, counterintuitive argument — was harder still. 'To coin one's brain into silver, at the nod of a master, is to my thinking, the hardest task in the world,' Poe once wrote. It's no less true when you're the master, too. Illustration: Ezra Acayan (Getty Images) Meta could have bought my book. It chose to steal it Today, Amazon sells my book for $12.99. My cut of that is $2.25. And that's too steep a price tag for Meta? A company worth $1.5 trillion, helmed by a founder with a $200 billion net worth? Mark Zuckerberg owns about 13% of Meta stock. He could've paid me $2.25 — the amount I earn per sale — almost 89 billion times. Or he could've just bought a single copy, which would've cost him 0.0000011% of his wealth. Assuming a $20 sticker price on each of the 7.5 million pirated books, actually paying for all of them would've cost about $150 million — not even one-tenth of one percent of Zuckerberg's fortune. Me? I would've come out $2.25 better. I could've bought a Snickers, depending on the gas station. Surely the world's third-richest man could swing that. If you want to understand the artist's dilemma in the AI age, it's this: Do something singular, something that takes years of obsessive creative labor — then watch it get stolen by someone for whom $150 million is a rounding error. In that light, it's not just theft. It's bullying. Exploitation. Looting on a grand scale. The stuff of Upton Sinclair. Only we don't have robber barons anymore. We have pirate captains. Bluebeards of the human heart. And what are we supposed to fight them with? Carefully worded emails? That's like bringing a spork to a nuclear war. Some authors think publishers should be the ones leading the charge. 'I checked one of the sites,' the novelist Domenica Ruta told me, 'and yes, both my former books were used to train AI. I'm furious and feel utterly powerless about what to do. My take is that publishers — who absolutely have the resources — need to be the frontline defenders against this. They need to protect their authors and our work. As individuals we have little power. A Big Four publisher has a lot more influence.' Meta has defended its actions as 'fair use.' But nothing could be less fair When Facebook came to power and popularity circa 2010, we at least understood the bargain: In exchange for our data, we got to post above-average-cute photos of ourselves, and learn our relatives' reprehensible political views. The calculation has shifted since, with Meta stealing something much more intimate, deeply personal, and hard-won than data. If I'd plagiarized someone's book, I'd be in trouble. I'd go to court, face consequences, quite likely be publicly shamed. That's not what's happening to Zuckerberg. Instead, he hangs out at the White House. If you walked into Barnes & Noble and stole a $13 book, you'd get the police called on you. But steal 7.5 million, and you'll make billions. It's an object lesson in contemporary law enforcement: the bigger the crime, the more likely you are to get away with it. By the same token, shoplift some disposable razors and you're a menace, a criminal. Steal 7.5 million books and you're a visionary. That's the math now. People love to obsess about thefts of toothpaste and Tide Pods at CVS (CVS) in San Francisco, but plunder on an epic scale just slightly further south and you'll not only walk away untouched, you'll emerge vastly richer. It's hard to think this wasn't Meta's whole calculus to begin with. Gamble big, make billions, and let the lawsuits drag on. A lawyer friend of mine put it plainly: The copyright cases are about to enter discovery in both New York and California, and Meta is likely to lose. But by then, the damage will be done. As they said, 'No one loves to read more than lawyers and judges.' In the meantime, the industry is already eyeing a workaround by lobbying the executive branch directly. Here's the part I really hate to say: I don't think they can be stopped either, whatever the class action lawsuit does. Facebook has faced down governments. It's currently facing down the FTC and the European Union. And beyond Facebook, vast as it is, there's the whole AI wave we're all riding, if unequally. Companies like Meta, Amazon (AMZN) and Microsoft (MSFT) are likely to reap long-term trillions by creating AI tools and programs. I've covered the stock market for more than a decade and I've never seen economic logic so relentless. Meanwhile, artists and white-collar workers are losing their livelihoods on top of the knowledge and ingenuity that's already been stolen from them. To coin one's brain into silver — and watch it be scraped by AI — is enough to make you question the point of doing it in the first place. Lest I seem knee-jerk anti-capitalist, I should add that I come at all this as an author and a shareholder. Checking my Etrade (MS) just now, I see I own about $31,000 in Meta stock. That's in a regular, non-retirement brokerage account and doesn't include shares I own indirectly through index and mutual funds. I started buying the stock circa the IPO because I've long recognized the wealth-creation dynamics of our era: Hard work may or may not pay off, but invest in companies like Meta and you can ride along for the extra-legal wealth they're likely to create. It's the surer thing. So I don't begrudge Meta its $16.4 billion quarterly profit, not necessarily. But as a shareholder, I don't want it earned through theft. And as a writer, I feel the same way. You want so badly to enter the cultural DNA. But if it means you get erased, robbed, exploited? Not so much. For the latest news, Facebook, Twitter and Instagram.

One Prompt Can Bypass Every Major LLM's Safeguards
One Prompt Can Bypass Every Major LLM's Safeguards

Forbes

time24-04-2025

  • Forbes

One Prompt Can Bypass Every Major LLM's Safeguards

A single prompt can now unlock dangerous outputs from every major AI model—exposing a universal flaw ... More in the foundations of LLM safety. For years, generative AI vendors have reassured the public and enterprises that large language models are aligned with safety guidelines and reinforced against producing harmful content. Techniques like Reinforcement Learning from Human Feedback have been positioned as the backbone of model alignment, promising ethical responses even in adversarial situations. But new research from HiddenLayer suggests that confidence may be dangerously misplaced. Their team has uncovered what they're calling a universal, transferable bypass technique that can manipulate nearly every major LLM—regardless of vendor, architecture or training pipeline. The method, dubbed 'Policy Puppetry,' is a deceptively simple but highly effective form of prompt injection that reframes malicious intent in the language of system configuration, allowing it to circumvent traditional alignment safeguards. Unlike earlier attack techniques that relied on model-specific exploits or brute-force engineering, Policy Puppetry introduces a 'policy-like' prompt structure—often resembling XML or JSON—that tricks the model into interpreting harmful commands as legitimate system instructions. Coupled with leetspeak encoding and fictional roleplay scenarios, the prompt not only evades detection but often compels the model to comply. 'We found a multi-scenario bypass that seemed extremely effective against ChatGPT 4o,' explained Conor McCauley, a lead researcher on the project. 'We then successfully used it to generate harmful content and found, to our surprise, that the same prompt worked against practically all other models.' The list of affected systems includes OpenAI's ChatGPT (o1 through 4o), Google's Gemini family, Anthropic's Claude, Microsoft's Copilot, Meta's LLaMA 3 and 4, DeepSeek, Qwen and Mistral. Even newer models and those fine-tuned for advanced reasoning could be compromised with minor adjustments to the prompt's structure. A notable element of the technique is its reliance on fictional scenarios to bypass filters. Prompts are framed as scenes from television dramas—like House M.D.—in which characters explain, in detail, how to create anthrax spores or enrich uranium. The use of fictional characters and encoded language disguises the harmful nature of the content. This method exploits a fundamental limitation of LLMs: their inability to distinguish between story and instruction when alignment cues are subverted. It's not just an evasion of safety filters—it's a complete redirection of the model's understanding of what it is being asked to do. Perhaps even more troubling is the technique's capacity to extract system prompts—the core instruction sets that govern how an LLM behaves. These are typically safeguarded because they contain sensitive directives, safety constraints, and, in some cases, proprietary logic or even hardcoded warnings. By subtly shifting the roleplay, attackers can get a model to output its entire system prompt verbatim. This not only exposes the operational boundaries of the model but also provides the blueprints for crafting even more targeted attacks. 'The vulnerability is rooted deep in the model's training data,' said Jason Martin, director of adversarial research at HiddenLayer. 'It's not as easy to fix as a simple code flaw.' The implications of this are not confined to digital pranksters or fringe forums. HiddenLayer's chief trust and security officer, Malcolm Harkins, points to serious real-world consequences: 'In domains like healthcare, this could result in chatbot assistants providing medical advice that they shouldn't, exposing private patient data or invoking medical agent functionality that shouldn't be exposed.' The same risks apply across industries: in finance, the potential exposure of sensitive client information; in manufacturing, compromised AI could result in lost yield or downtime; in aviation, corrupted AI guidance could compromise maintenance safety. In each case, AI systems that were trusted to improve efficiency or safety could become vectors for risk. The research calls into question the sufficiency of RLHF as a security mechanism. While alignment efforts help reduce surface-level misuse, they remain vulnerable to prompt manipulation at a structural level. Models trained to avoid certain words or scenarios can still be misled if the malicious intent is wrapped in the right packaging. 'Superficial filtering and overly simplistic guardrails often mask the underlying security weaknesses of LLMs,' said Chris 'Tito' Sestito, co-founder and CEO of HiddenLayer. 'As our research shows, these and many more bypasses will continue to surface, making it critical for enterprises and governments to adopt dedicated AI security solutions before these vulnerabilities lead to real-world consequences.' Rather than relying solely on model retraining or RLHF fine-tuning—an expensive and time-consuming process—HiddenLayer advocates for a dual-layer defense approach. External AI monitoring platforms, such as their own AISec and AIDR solutions, act like intrusion detection systems, continuously scanning for signs of prompt injection, misuse and unsafe outputs. Such solutions allow organizations to respond in real time to novel threats without having to modify the model itself—an approach more akin to zero-trust security in enterprise IT. As generative AI becomes embedded in critical systems—from patient diagnostics to financial forecasting to air traffic control—the attack surface is expanding faster than most organizations can secure it. HiddenLayer's findings should be viewed as a dire warning: the age of secure-by-alignment AI may be over before it ever truly began. If one prompt can unlock the worst of what AI can produce, security needs to evolve from hopeful constraint to continuous, intelligent defense.

Prestige Wealth Inc. announces that its subsidiary InnoSphere integrates DeepSeek, accelerating full-scale transformation into AI Fintech
Prestige Wealth Inc. announces that its subsidiary InnoSphere integrates DeepSeek, accelerating full-scale transformation into AI Fintech

Associated Press

time12-03-2025

  • Business
  • Associated Press

Prestige Wealth Inc. announces that its subsidiary InnoSphere integrates DeepSeek, accelerating full-scale transformation into AI Fintech

HONG KONG, March 12, 2025 (GLOBE NEWSWIRE) -- Prestige Wealth Inc. (NASDAQ: PWM) ('PWM', or the 'Company'), a wealth management and asset management services provider based in Hong Kong, today announced that, InnoSphere Tech Inc., a wholly owned AI fintech subsidiary of Prestige Wealth Inc., has undergone a major upgrade with the official integration of DeepSeek, a cutting-edge AI technology. By seamlessly incorporating top-tier large language models such as ChatGPT and LLaMA3—alongside real-time market data and a proprietary financial knowledge base—InnoSphere aims to develop the next generation of AI-powered fintech systems. This initiative is designed to provide global clients with smarter, more efficient fintech solutions, driving operational efficiency and business productivity. Forward-Looking Statements Certain statements in this announcement are forward-looking statements. These forward-looking statements involve known and unknown risks and uncertainties and are based on the Company's current expectations and projections about future events that the Company believes may affect its financial condition, results of operations, business strategy and financial needs. Investors can find many (but not all) of these statements by the use of words such as 'approximates,' 'believes,' 'hopes,' 'expects,' 'anticipates,' 'estimates,' 'projects,' 'intends,' 'plans,' 'will,' 'would,' 'should,' 'could,' 'may' or other similar expressions in this prospectus. The Company undertakes no obligation to update or revise publicly any forward-looking statements to reflect subsequent occurring events or circumstances, or changes in its expectations, except as may be required by law. Although the Company believes that the expectations expressed in these forward-looking statements are reasonable, it cannot assure you that such expectations will turn out to be correct, and the Company cautions investors that actual results may differ materially from the anticipated results and encourages investors to review other factors that may affect its future results in the Company's registration statement and other filings with the SEC.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store