
AI Identifies Author of Charred Scroll Buried by Vesuvius for 2,000 Years
For the first time, researchers have identified the author and title of a document that's been locked inside a charred scroll for nearly 2,000 years—without peeling back a single layer.
The scroll, PHerc. 172, was recovered from the ruins of Herculaneum, the ancient Roman town buried by the ash and debris of Mount Vesuvius in 79 CE. It is one of three Herculaneum scrolls now held at Oxford's Bodleian Libraries.
Thanks to high-resolution scans and some seriously clever machine learning, scholars were able to virtually 'unwrap' the papyrus and read the name inside: On Vices, by the Epicurean philosopher Philodemus.
The treatise's full title, according to Fine Books Magazine, is On Vices and Their Opposite Virtues and In Whom They Are and About What. It is basically ancient self-help, exploring how to live a virtuous life by avoiding vice. Philodemus wrote the work in the first century BCE, and it is now being read for the first time since the eruption buried it nearly 2,000 years ago.
The discovery—confirmed by multiple research teams—earned the project's collaborators the $60,000 First Title Prize from the Vesuvius Challenge, an open-science competition that's been making ancient texts readable using AI.
In recent years, artificial intelligence has been instrumental in deciphering the carbonized scrolls. First discovered in the 18th century at what is now known as the Villa of the Papyri, they comprise one of the only surviving libraries from the classical world.
Due to their fragile, charred condition, traditional (read: manual) methods of unrolling the scrolls often destroyed them. Now, researchers are using advanced imaging and machine learning to read these texts without ever opening them.
The turning point came in 2015, when scientists used X-ray tomography to read a different ancient scroll from En-Gedi, creating a 3D scan that could be virtually 'unwrapped.' Building on this, researchers at the University of Kentucky developed the Volume Cartographer, a program that uses micro-CT imaging to detect the faint traces of carbon-based ink on the scrolls.
Unlike many ancient inks, which contain metal and show up clearly in X-ray scans, the Herculaneum ink is carbon-based, much like the charred papyrus it sits on, so a neural network had to be trained to recognize the subtle patterns that distinguish ink from blank page. In 2019, researchers successfully demonstrated this technique, setting the stage for broader applications.
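To make the detection task concrete, here is a minimal sketch, on entirely synthetic data, of the kind of patch classifier involved: a tiny neural network learns to separate simulated scan patches containing a faint "ink" stroke from blank ones. The patch size, network shape, and noise levels are illustrative assumptions, not the Vesuvius Challenge teams' actual models, which are far larger and operate on real 3D micro-CT volumes.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patch(has_ink):
    # Simulated 8x8 scan patch: carbonized papyrus reads as noisy
    # mid-grey, and carbon ink raises density only slightly.
    patch = rng.normal(0.5, 0.05, (8, 8))
    if has_ink:
        patch[:, 3:5] += 0.04  # faint vertical stroke, barely above the noise
    return patch.ravel()

X = np.array([make_patch(i % 2 == 0) for i in range(400)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(400)])

# One hidden layer, trained with full-batch gradient descent on cross-entropy.
W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, 16); b2 = 0.0
lr = 1.0

for _ in range(1000):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # predicted P(ink) per patch
    g = (p - y) / len(y)                      # d(loss)/d(logit)
    gh = np.outer(g, W2) * (1 - h ** 2)       # backprop through tanh
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum()
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)

acc = float(((p > 0.5) == (y == 1)).mean())
print(f"training accuracy: {acc:.2f}")
```

Even in this cartoon, the key property carries over: no single pixel is informative on its own, so the model must learn the spatial pattern a stroke leaves across the patch.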
These breakthroughs culminated in the Vesuvius Challenge, launched in 2023 to crowdsource the decoding of unopened scrolls. Participants use AI tools—particularly convolutional neural networks and transformer models—to identify and reconstruct text within the scrolls. In October 2023, the first word ('purple') was read from an unopened scroll, earning a $40,000 prize. The challenge continues, with prizes offered for deciphering additional text and improving the technology.
Brent Seales, a computer scientist at the University of Kentucky and co-founder of the Vesuvius Challenge, told The Guardian that the team's current bottleneck is cleaning, organizing, and enhancing the scan data so that researchers can actually interpret the carbonized ink as text.
Importantly, the digital unwrapping process is guided by human expertise. AI highlights likely areas of ink on the ancient documents, but scholars interpret the patterns to determine whether they form coherent words or phrases. The goal is not only to recover lost philosophical texts, many of them possibly written by Epicurus or his followers, but also to establish a scalable system for digitizing and decoding ancient texts, transforming our understanding of the classical world.

Related Articles


WIRED
A Deep Learning Alternative Can Help AI Agents Gameplay the Real World
Jun 11, 2025 12:30 PM

A new machine learning approach tries to better emulate the human brain, in hopes of creating more capable agentic AI.

A new machine learning approach that draws inspiration from the way the human brain seems to model and learn about the world has proven capable of mastering a number of simple video games with impressive efficiency.

The new system, called Axiom, offers an alternative to the artificial neural networks that are dominant in modern AI. Axiom, developed by the software company Verses AI, is equipped with prior knowledge about the way objects physically interact with each other in the game world. It then uses an algorithm to model how it expects the game to act in response to input, updating that model based on what it observes, a process dubbed active inference.

The approach draws inspiration from the free energy principle, a theory that seeks to explain intelligence using principles drawn from math, physics, and information theory as well as biology. The free energy principle was developed by Karl Friston, a renowned neuroscientist who is chief scientist at the 'cognitive computing' company Verses. Friston told me over video from his home in London that the approach may be especially important for building AI agents. 'They have to support the kind of cognition that we see in real brains,' he said. 'That requires a consideration, not just of the ability to learn stuff but actually to learn how you act in the world.'

The conventional approach to learning to play games involves training neural networks through what is known as deep reinforcement learning, which involves experimenting and tweaking their parameters in response to positive or negative feedback. The approach can produce superhuman game-playing algorithms, but it requires a great deal of experimentation to work.
Axiom masters various simplified versions of popular video games, called drive, bounce, hunt, and jump, using far fewer examples and much less computing power.

'The general goals of the approach and some of its key features track with what I see as the most important problems to focus on to get to AGI,' says François Chollet, an AI researcher who developed ARC 3, a benchmark designed to test the capabilities of modern AI algorithms. Chollet is also exploring novel approaches to machine learning, and is using his benchmark to test models' abilities to learn how to solve unfamiliar problems rather than simply mimic previous examples. 'The work strikes me as very original, which is great,' he says. 'We need more people trying out new ideas away from the beaten path of large language models and reasoning language models.'

Modern AI relies on artificial neural networks that are roughly inspired by the wiring of the brain but work in a fundamentally different way. Over the past decade and a bit, deep learning, an approach that uses neural networks, has enabled computers to do all sorts of impressive things, including transcribe speech, recognize faces, and generate images. Most recently, of course, deep learning has led to the large language models that power garrulous and increasingly capable chatbots.

Axiom, in theory, promises a more efficient way to build AI from scratch. It might be especially effective for creating agents that need to learn efficiently from experience, says Gabe René, the CEO of Verses. René says one finance company has begun experimenting with the company's technology as a way of modeling the market. 'It is a new architecture for AI agents that can learn in real time and is more accurate, more efficient, and much smaller,' René says. 'They are literally designed like a digital brain.'
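The "predict, observe, update" loop at the heart of active inference can be illustrated with a toy example. The sketch below is a bare-bones Bayesian belief update over two hidden states, where "surprise" is the negative log probability the model assigned to what it just saw; every number here is made up for illustration, and this is not Verses' actual Axiom implementation.

```python
import numpy as np

# Two hidden states of the world and two possible observations.
# Rows give P(observation | state); the values are purely illustrative.
likelihood = np.array([[0.9, 0.1],   # state 0 mostly produces observation 0
                       [0.2, 0.8]])  # state 1 mostly produces observation 1
belief = np.array([0.5, 0.5])        # the agent starts with a uniform prior

def surprise(obs, belief):
    # Negative log of the probability the model assigned to this observation.
    return -np.log(likelihood[:, obs] @ belief)

def update(obs, belief):
    # Bayesian belief update: posterior is proportional to likelihood x prior.
    posterior = likelihood[:, obs] * belief
    return posterior / posterior.sum()

rng = np.random.default_rng(1)
before = surprise(0, belief)
for _ in range(20):
    obs = rng.choice(2, p=likelihood[0])  # the world is really in state 0
    belief = update(obs, belief)
after = surprise(0, belief)

print(f"belief in state 0: {belief[0]:.3f}")
print(f"surprise at observation 0, before: {before:.3f}, after: {after:.3f}")
```

As the agent's beliefs converge on the true state, observations that once surprised it no longer do. Full active inference goes further, choosing actions that are expected to minimize future surprise, which is the part that makes it a theory of behavior rather than just perception.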
Somewhat ironically, given that Axiom offers an alternative to modern deep learning, the free energy principle was originally influenced by the work of British-Canadian computer scientist Geoffrey Hinton, who was awarded both the Turing Award and the Nobel Prize for his pioneering work on deep learning. Hinton was a colleague of Friston's at University College London for years. For more on Friston and the free energy principle, I highly recommend WIRED's 2018 feature article on his work. Friston's work also influenced an exciting new theory of consciousness, described in a book WIRED reviewed in 2021.


Gizmodo
After Slashing Thousands of Jobs, Trump's FDA Wants to Use AI to Rapidly Approve New Drugs
AI is slowly permeating all corners of the federal government, including the Food and Drug Administration, where, according to a newly released paper, its top brass now wants to use automation to more 'efficiently' approve new drugs.

An article recently published in the Journal of the American Medical Association (JAMA) by Dr. Vinay Prasad, the director of the FDA subagency that oversees vaccines, lays out a vision for revamping the agency that will supposedly 'increase efficiency' at the body that regulates what you eat and drink. According to the article, a big part of making the agency more efficient is using AI for tasks that humans previously handled. Specifically, it suggests using automation to speed up the drug approval process.

'The advent of generative artificial intelligence (AI) holds several promises to modernize the FDA and radically increase efficiency in the review process,' the paper reads, noting that the agency has already implemented a pilot program for 'AI-assisted scientific review.' The article also speaks of a need to 'reevaluate legacy processes at the agency that slow down decisions and do not increase safety.'

The article additionally claims the agency is looking for ways to use technology to avoid 'animal cruelty.' It has supposedly done this by developing 'a road map to reduce animal testing using AI-based computational modeling to predict toxicity-leveraging chip technology.'

All of this news comes not long after the FDA purged thousands of staffers from its ranks, including those responsible for reviewing food safety. Now, in what has become a typical pivot for organizations looking to integrate AI, roles previously held by humans appear to be getting automated. The article also suggests using 'big data' to better assess how drug products are developed and reviewed.
'In the past, randomized clinical trials were the sole method used to determine if a product was safe and effective,' the article reads. 'Advances in causal inference in nonrandomized data, including the use of target trials, which attempt to balance confounding and time zero, have [the] potential to yield actionable causal conclusions, in many cases at lower cost.'

AI has spread throughout other parts of the government as the administration's supposed 'efficiency' mandate looks for newfangled ways to 'streamline' bureaucratic processes. Even if AI could help speed up some of them, a quick look at how automation rollouts are being handled at other agencies doesn't inspire confidence in the initiative, particularly for an agency tasked with overseeing the drugs that go into Americans' bodies. New drugs always involve guinea pigs somewhere in the process; now AI's impact will have to be factored into how well the latest tests actually work.


CNET
Meta Says Its New AI Model Can Understand the Physical World
Meta says a new generative AI model it released Wednesday could change how machines understand the physical world, opening up opportunities for smarter robots and more. The new open-source model, called V-JEPA 2, for Video Joint Embedding Predictive Architecture 2, is designed to help AI understand things like gravity and object permanence, Meta said.

Current models that allow AI to interact with the physical world rely on labeled data or video to mimic reality; this approach instead emphasizes the logic of the physical world, including how objects move and interact. The model could allow AI to grasp, for example, that a ball rolling off a table will fall.

Meta said the model could be useful for devices like autonomous vehicles and robots because they wouldn't need to be trained on every possible situation. The company called it a step toward AI that can adapt the way humans can.

One struggle in the space of physical AI has been the need for significant amounts of training data, which takes time, money, and resources. At SXSW earlier this year, experts said synthetic data -- training data created by AI -- could help prepare a more traditional learning model for unexpected situations. (In Austin, the example used was the emergence of bats from the city's famed Congress Avenue Bridge.) Meta said its new model simplifies the process and makes it more efficient for real-world applications because it doesn't rely on all of that training data.
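The "joint embedding predictive" idea can be caricatured in a few lines: rather than predicting the raw next frame, a predictor is fit to forecast the next frame's embedding. The sketch below uses a toy "video" of a ball falling under gravity and a random linear encoder; Meta's actual architecture trains large video transformers on masked clips, so everything here is an illustrative assumption, not V-JEPA 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video": a ball falling under gravity. Each frame is (height, velocity).
def rollout(h0, steps=10, dt=0.05):
    frames, h, v = [], h0, 0.0
    for _ in range(steps):
        frames.append([h, v])
        v -= 9.8 * dt
        h += v * dt
    return np.array(frames)

clips = np.stack([rollout(h0) for h0 in rng.uniform(2.0, 5.0, 200)])

# The JEPA idea in miniature: an encoder maps frames to embeddings, and a
# predictor learns to forecast the *embedding* of the next frame rather
# than the raw frame itself.
enc = rng.normal(0.0, 0.5, (2, 4))          # toy frozen encoder

X = clips[:, :-1].reshape(-1, 2) @ enc      # embeddings of frames t
Y = clips[:, 1:].reshape(-1, 2) @ enc       # embeddings of frames t+1
Xb = np.hstack([X, np.ones((len(X), 1))])   # bias column for the gravity drift

# Least-squares fit of an affine predictor in embedding space.
pred, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

err = np.mean((Xb @ pred - Y) ** 2)
print(f"embedding-space prediction error: {err:.2e}")
```

Because falling under gravity follows a simple rule, the predictor nails it from examples alone; the appeal of predicting in embedding space is that the model only has to capture the dynamics that matter, not every pixel.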