
AI Identifies Author of Charred Scroll Buried by Vesuvius for 2,000 Years
For the first time, researchers have identified the author and title of a document that's been locked inside a charred scroll for nearly 2,000 years—without peeling back a single layer.
The scroll, PHerc. 172, was recovered from the ruins of Herculaneum, the ancient Roman town buried by the ash and debris of Mount Vesuvius in 79 CE. It is one of three Herculaneum scrolls now held at Oxford's Bodleian Libraries.
Thanks to high-resolution scans and some seriously clever machine learning, scholars were able to virtually 'unwrap' the papyrus and read the title and author hidden inside: On Vices, by the Epicurean philosopher Philodemus.
According to Fine Books Magazine, the treatise's full title is On Vices and Their Opposite Virtues and In Whom They Are and About What. It is essentially ancient self-help, exploring how to live a virtuous life by avoiding vice. Philodemus wrote the work in the first century BCE, and it is now being read for the first time since it was buried in the devastating eruption nearly 2,000 years ago.
The discovery—confirmed by multiple research teams—earned the project's collaborators the $60,000 First Title Prize from the Vesuvius Challenge, an open-science competition that's been making ancient texts readable using AI.
In recent years, artificial intelligence has been instrumental in deciphering the ancient, carbonized scrolls from Herculaneum. First discovered in the 18th century in what is now known as the Villa of the Papyri, these scrolls make up one of the only surviving libraries from the classical world.
Due to their fragile, charred condition, traditional (read: manual) methods of unrolling the scrolls often destroyed them. Now, researchers are using advanced imaging and machine learning to read these texts without ever opening them.
The turning point came in 2015, when scientists used X-ray tomography to read a different ancient scroll, from En-Gedi, creating a 3D scan that could be virtually 'unwrapped.' Building on this, researchers at the University of Kentucky developed Volume Cartographer, software that maps the tightly wound layers of papyrus within micro-CT scans so that the faint traces of carbon-based ink can be located.
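To make the idea concrete, here is a minimal, hypothetical sketch of the 'virtual unwrapping' step: given a 3D scan volume and a surface traced through one layer of papyrus, the voxel intensities along that surface are sampled and laid out as a flat 2D image. The synthetic volume and the idealized spiral surface below are placeholder assumptions; the real Volume Cartographer pipeline, which works with painstakingly segmented surfaces, is far more involved.

```python
import numpy as np

# Placeholder: a micro-CT volume as a 3D array of X-ray intensities (z, y, x).
# In practice this comes from thousands of high-resolution scan slices.
volume = np.random.rand(128, 512, 512).astype(np.float32)

def unwrap_spiral_surface(vol, center=(256, 256), turns=3, points_per_turn=720):
    """Sample the volume along an idealized spiral (a stand-in for a real
    segmented papyrus surface) and flatten it into a 2D 'unwrapped' image."""
    depth = vol.shape[0]
    n_points = turns * points_per_turn
    thetas = np.linspace(0, turns * 2 * np.pi, n_points)
    radii = np.linspace(40, 200, n_points)  # the spiral slowly works outward

    ys = np.clip((center[0] + radii * np.sin(thetas)).astype(int), 0, vol.shape[1] - 1)
    xs = np.clip((center[1] + radii * np.cos(thetas)).astype(int), 0, vol.shape[2] - 1)

    unwrapped = np.zeros((depth, n_points), dtype=np.float32)
    for z in range(depth):
        unwrapped[z] = vol[z, ys, xs]  # one row of the flattened surface per slice
    return unwrapped

flat = unwrap_spiral_surface(volume)
print(flat.shape)  # (128, 2160): a 2D image ready for ink detection
```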
Because the ink is carbon-based and contains no metal, unlike the inks on many other ancient documents, it is barely distinguishable from the charred papyrus in X-ray scans; a neural network had to be trained to recognize the subtle patterns that indicate where ink sits on the surface. In 2019, researchers successfully demonstrated the technique, setting the stage for broader applications.
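As a rough, hypothetical illustration of that training step (not the teams' actual code): a small 3D convolutional network can be shown a cube of scan voxels around each surface point and taught to predict whether ink is present there. The architecture, patch size, and random stand-in data below are assumptions made purely for the example; real training pairs voxel patches with known ink locations.

```python
import torch
import torch.nn as nn

class InkDetector(nn.Module):
    """Toy 3D CNN that classifies a small voxel patch as ink / no ink."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, 1)  # one logit: how likely the patch holds ink

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = InkDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Random stand-in data: 16 patches of 16x16x16 voxels with binary ink labels.
patches = torch.rand(16, 1, 16, 16, 16)
labels = torch.randint(0, 2, (16, 1)).float()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```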
These breakthroughs culminated in the Vesuvius Challenge, launched in 2023 to crowdsource the decoding of unopened scrolls. Participants use AI tools—particularly convolutional neural networks and transformer models—to identify and reconstruct text within the scrolls. In October 2023, the first word ('purple') was read from an unopened scroll, earning a $40,000 prize. The challenge continues, with prizes offered for deciphering additional text and improving the technology.
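To picture what that detection step produces, here is a simplified, hypothetical sketch: a scoring function (standing in for a trained network) is slid across the flattened surface so that every position receives an ink probability, yielding a grayscale map in which letter strokes can, ideally, be picked out.

```python
import numpy as np

def ink_probability_map(flat_surface, patch=16, stride=8, score_fn=None):
    """Slide a window over a flattened papyrus surface and score each patch for ink.
    `score_fn` stands in for a trained neural network's prediction."""
    if score_fn is None:
        # Placeholder heuristic (not a real detector): brighter patches score higher.
        score_fn = lambda p: float(p.mean())

    h, w = flat_surface.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    probs = np.zeros((rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            window = flat_surface[i * stride:i * stride + patch,
                                  j * stride:j * stride + patch]
            probs[i, j] = score_fn(window)
    return probs

flat = np.random.rand(128, 2160).astype(np.float32)  # a flattened surface, as above
prob_map = ink_probability_map(flat)
print(prob_map.shape)  # a coarse ink-probability image for scholars to inspect
```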
Brent Seales, a computer scientist at the University of Kentucky and co-founder of the Vesuvius Challenge, told The Guardian that the team's current bottleneck is cleaning, organizing, and enhancing the scan data so that researchers can actually interpret the carbonized ink as text.
Importantly, the digital unwrapping process is guided by human expertise. AI highlights likely areas of ink on the ancient documents, but scholars interpret the patterns to determine whether they form coherent words or phrases. The goal is not only to recover lost philosophical texts, many of which may be works by Epicurus and his followers, but also to establish a scalable system for digitizing and decoding ancient texts, one that could transform our understanding of the classical world.
