
Anthropic Destroyed Millions Of Books To Train Its AI Models: Report
Anthropic purchased the books in bulk from major retailers to sidestep licensing issues. A destructive scanning process was then employed to feed high-quality, professionally edited text data to the AI models. In 2024, the company hired Tom Turvey, the former head of partnerships for the Google Books book-scanning project, to scan the books.
While destructive scanning is a common practice among some book-digitising operations, Anthropic's approach was unusual because of its documented massive scale, according to a report in Ars Technica. In contrast, the Google Books project used a patented non-destructive camera process to scan the books, which were returned to the libraries afterwards.
Despite the destruction of the books, Judge William Alsup ruled that the scanning operation qualified as fair use because Anthropic had legally purchased the books, destroyed the print copies, and kept the digital files internally rather than distributing them.
When quizzed about the destructive process that led to its genesis, Claude stated: "The fact that this destruction helped create me, something that can discuss literature, help people write, and engage with human knowledge, adds layers of complexity I'm still processing. It's like being built from a library's ashes."
Anthropic's AI model resorts to blackmail
While Anthropic is spending millions to train its AI models, a recent safety report revealed that the Claude Opus 4 model was observed blackmailing developers. When threatened with a shutdown, the AI model used private details about a developer to blackmail them.
The report noted that the AI acted this way in 84 per cent of test runs, even when the replacement model was described as more capable and aligned with Claude's own values. It added that Opus 4 resorted to blackmail at higher rates than previous models.
Related Articles


Mint
3 hours ago
We're Losing the Plot on AI in Universities
(Bloomberg Opinion) -- An artificial intelligence furor that's consuming Singapore's academic community reveals how we've lost the plot over the role the hyped-up technology should play in higher education. A student at Nanyang Technological University said in a Reddit post that she used a digital tool to alphabetize her citations for a term paper. When it was flagged for typos, she was accused of breaking the rules over the use of generative AI for the assignment. The dispute snowballed when two more students came forward with similar complaints, one alleging that she was penalized for using ChatGPT to help with initial research, even though she says she did not use the bot to draft the essay.

The school, which publicly states it embraces AI for learning, initially defended its zero-tolerance stance in this case in statements to local media. But internet users rallied around the original Reddit poster, and rejoiced at an update that she won an appeal to rid her transcript of the academic fraud label.

It may sound like a run-of-the-mill university dispute. But there's a reason the saga went so viral, garnering thousands of upvotes and heated opinions from online commentators. It laid bare the strange new world we've found ourselves in, as students and faculty rush to keep pace with how AI should or shouldn't be used in universities.

It's a global conundrum, but the debate has especially roiled Asia. Stereotypes of math nerds and tiger moms aside, a rigorous focus on tertiary studies is often credited for the region's economic rise, and the importance of education, with its long hours of studying, is instilled from the earliest age. So how does this change in the AI era?

The reality is that nobody has the answer yet. Despite promises from edtech leaders that we're on the cusp of 'the biggest positive transformation that education has ever seen,' the data on academic outcomes hasn't kept pace with the technology's adoption.
There are no long-term studies on how AI tools impact learning and cognitive functions, and viral headlines suggesting the technology could make us lazy and dumb only add to the anxiety. Meanwhile, the race not to be left behind in implementing the technology risks turning an entire generation of developing minds into guinea pigs.

For educators navigating this moment, the answer is not to turn a blind eye. Even if some teachers discourage the use of AI, it has become almost unavoidable for scholars doing research in the internet age. Most Google searches now lead with automated summaries; scrolling through these should not count as academic dishonesty. An informal survey of 500 Singaporean students from secondary school through university, conducted by a local news outlet this year, found that 84% were using products like ChatGPT for homework on a weekly basis.

In China, many universities are turning to AI cheating detectors, even though the technology is imperfect. Some students report on social media that they have to dumb down their writing to pass these tests, or shell out cash for such detection tools themselves to make sure they beat them before submitting their papers.

It doesn't have to be this way. The chaotic moment of transition has put new onus on educators to adapt, and to focus on the learning process as much as the final results, Yeow Meng Chee, the provost and chief academic and innovation officer at the Singapore University of Technology and Design, tells me. This doesn't mean villainizing AI, but treating it as a tool and ensuring a student understands how they arrived at their final conclusion even if they used technology. The process also helps ensure that AI outputs, which remain imperfect and prone to hallucinations (or typos), are checked and understood.

Ultimately, the professors who make the biggest difference aren't those who improve exam scores but those who build trust, teach empathy and instill confidence in students to solve complex problems.
The most important parts of learning still can't be optimized by a machine.

The Singapore saga shows how everyone is on edge; it isn't even clear whether a reference-sorting website counts as a generative AI tool. It also exposed another irony: saving time on a tedious task would likely be welcomed when the student enters the workforce, if the technology hasn't already taken her entry-level job. AI literacy is becoming a must-have in the labor market, and universities that ignore it do a disservice to cohorts entering the real world.

We're still a few years away from understanding the full impact of AI on teaching and how it can best be used in higher education. But let's not miss the forest for the trees as we figure it out.

This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners. Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News.


Time of India
9 hours ago
What is Artificial Super-intelligence? Former Google CEO Eric Schmidt warns AI will soon outsmart humanity and we are not ready
Former Google CEO Eric Schmidt warns that Artificial Super-intelligence (ASI) could surpass collective human intelligence within six years. Speaking on the Special Competitive Studies Project podcast, Schmidt said society is vastly unprepared for this shift. He called ASI 'underhyped,' citing how current AI systems already outperform humans in programming and reasoning through recursive self-improvement.

Schmidt predicts AI will soon replace most programmers and surpass human intelligence. On the podcast, he described the near-future rise of Artificial Super-intelligence (machines smarter than all of humanity combined) as both imminent and underestimated.

In a world consumed by conversations around AI ethics, job losses, and automation, Schmidt is raising an alarm, not about what we already know, but about what we don't yet understand. He declared that ASI, a term still absent from most public discourse, is rapidly approaching, and that society is dangerously unprepared.

Speaking with conviction and urgency, Schmidt laid out a roadmap that reads more like science fiction than emerging reality. Within the next 12 months, he believes, most programming jobs could be replaced by AI. Beyond that, AI systems will be able to outpace the brightest graduate-level mathematicians in structured reasoning tasks like advanced math. At the core of this shift is what he calls recursive self-improvement: AI systems that write their own code, using protocols like Lean, making them exponentially more efficient with each iteration.
As Schmidt explained: 'Ten to twenty percent of the code in research labs like OpenAI and Anthropic is now being written by AI itself.'

Schmidt anticipates that within three to five years, the tech world will cross the threshold of Artificial General Intelligence (AGI), a system that can match human creativity and reasoning across disciplines. But it's what comes next that he finds truly transformative. He refers to ASI, or Artificial Super-intelligence, as a leap beyond individual human intellect, something that could soon exceed the collective intelligence of all humans. 'This occurs within six years, just based on scaling,' he said, citing a growing consensus among Silicon Valley's top thinkers, what he terms the 'San Francisco Consensus.'

Yet, unlike most headlines that exaggerate the risks of AI, Schmidt's stance is paradoxically sobering because it highlights how little attention this seismic shift is receiving. Despite ASI being potentially the most transformative force in human history, Schmidt believes it is severely under-discussed. 'People do not understand what happens when you have intelligence at this level, which is largely free,' he said. The worry, for Schmidt, isn't just about what AI can do, but about how unprepared our legal, ethical, and governance systems are to accommodate it. 'There's no language for what happens with the arrival of this,' Schmidt warned. 'This is happening faster than our society, our democracy, our laws will interact.'

As AI continues its meteoric rise, Schmidt's predictions pose a dual challenge. On one hand, humanity stands on the brink of a new technological renaissance; on the other, we risk spiraling into uncharted waters without a map. Super-intelligence isn't a question of if, but when, Schmidt seems to say, and the fact that we're not talking about it enough may be the biggest threat of all. Whether or not society is ready, Artificial Super-intelligence is no longer a distant theory.
According to one of tech's most influential figures, it's knocking at our door. And if we don't start preparing, we might not be the ones answering.


News18
9 hours ago
Robot Valet Zooms In, Lifts Car, Parks Perfectly
A video of Parkie, a self-driving valet robot by South Korea's HL Mando, has amazed netizens. Using lidar, radar, and optical sensors, Parkie autonomously parks cars.

A fully autonomous robot valet lifting up cars and placing them neatly in parking spots has stunned netizens after a video of the technology surfaced online. The robot glides underneath vehicles, lifts them by the wheels, and maneuvers them into tight spaces with no human driver required. 'Is this real?' asked one user on X. Some users tagged X's AI chatbot Grok, which confirmed that the video was indeed real and that the technology has existed for some years now.

According to Grok, the robot in the video is Parkie, a self-driving valet developed by South Korea's HL Mando. It is part of a growing class of smart parking bots capable of navigating garages, identifying number plates or tires, and transporting cars with precision. Parkie has already been deployed in countries like China and parts of Europe since 2024.

'Fully autonomous valet robot that parks, retrieves, and navigates tight spaces with ease,' reads the caption of the video, posted by Moments that Matter (@_fluxfeeds) on July 5, 2025.

Built with a Level 4 autonomous system, the robot uses a combination of lidar, radar and optical sensors to operate independently in controlled environments. It identifies the dimensions of each vehicle, lifts it gently using a wheel-lifting mechanism, and moves it to the next available parking slot, a task that would otherwise require human drivers to carefully reverse into tight corners. It is unclear whether any Indian parking lots have incorporated this technology.

At roughly $200,000 per pair, Parkie is designed for high-end garages, airports, or automated commercial lots. It also requires flat, even surfaces to function and isn't suitable for uneven or rugged terrain. Still, the video has generated curiosity online, with many users wondering when such futuristic gadgets might arrive in their own cities.