
AI Identifies Author of Charred Scroll Buried by Vesuvius for 2,000 Years
For the first time, researchers have identified the author and title of a document that's been locked inside a charred scroll for nearly 2,000 years—without peeling back a single layer.
The scroll, PHerc. 172, was recovered from the ruins of Herculaneum, the ancient Roman town buried by ash and debris from Mount Vesuvius in 79 CE. It is one of three Herculaneum scrolls that now reside at Oxford's Bodleian Libraries.
Thanks to high-resolution scans and some seriously clever machine learning, scholars were able to virtually 'unwrap' the papyrus and read the title and author inside: On Vices, by the Epicurean philosopher Philodemus.
The treatise's full title, according to Fine Books Magazine, is On Vices and Their Opposite Virtues and In Whom They Are and About What. It is essentially ancient self-help, exploring how to live a virtuous life by avoiding vice. Philodemus wrote the work in the first century BCE, and it is now being read for the first time since it was buried in the devastating volcanic eruption nearly 2,000 years ago.
The discovery—confirmed by multiple research teams—earned the project's collaborators the $60,000 First Title Prize from the Vesuvius Challenge, an open-science competition that's been making ancient texts readable using AI.
In recent years, artificial intelligence has been instrumental in deciphering the ancient, carbonized scrolls from Herculaneum. First discovered in the 18th century at the site now known as the Villa of the Papyri, these scrolls comprise one of the only surviving libraries from the classical world.
Due to their fragile, charred condition, traditional (read: manual) methods of unrolling the scrolls often destroyed them. Now, researchers are using advanced imaging and machine learning to read these texts without ever opening them.
The turning point came in 2015, when scientists used X-ray tomography to read a different ancient scroll, from En-Gedi, creating a 3D scan that could be virtually 'unwrapped.' Building on this, researchers at the University of Kentucky developed the Volume Cartographer, software that maps the tightly coiled layers of papyrus within micro-CT scans so that faint traces of carbon-based ink can be detected on them.
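In rough outline, the 'unwrapping' step samples the CT volume along a segmented sheet of papyrus and flattens the result into an ordinary 2D image that ink-detection models can then run on. Here is a minimal Python sketch of that sampling step, assuming a NumPy volume and a precomputed surface grid of (z, y, x) coordinates; the actual Volume Cartographer pipeline is far more involved.

```python
import numpy as np

def unwrap_surface(volume: np.ndarray, surface: np.ndarray) -> np.ndarray:
    """Sample CT intensities along a segmented papyrus surface.

    volume  -- 3D micro-CT scan, shape (Z, Y, X)
    surface -- flattened surface grid, shape (H, W, 3); each entry holds
               the (z, y, x) voxel coordinates of one point on the papyrus
    Returns a 2D 'unwrapped' texture image of shape (H, W).
    """
    # Round fractional surface coordinates to the nearest voxel and
    # clamp them inside the volume bounds.
    coords = np.rint(surface).astype(int)
    for axis, size in enumerate(volume.shape):
        coords[..., axis] = np.clip(coords[..., axis], 0, size - 1)

    # Read one intensity per surface point: this flattened texture is
    # what ink-detection models are later run on.
    return volume[coords[..., 0], coords[..., 1], coords[..., 2]]

# Toy example: a random 'scan' and a flat synthetic surface.
vol = np.random.rand(64, 256, 256).astype(np.float32)
ys, xs = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
surf = np.stack([np.full_like(ys, 32), ys, xs], axis=-1).astype(np.float32)
texture = unwrap_surface(vol, surf)
print(texture.shape)  # (256, 256)
```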
Because the ink is carbon-based and contains no metal, unlike the metallic inks on many other ancient documents, it shows almost no contrast against the carbonized papyrus in X-ray scans. A neural network therefore had to be trained to recognize the subtle surface patterns that indicate ink. In 2019, researchers successfully demonstrated this technique, setting the stage for broader applications.
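In broad strokes, these ink detectors act as patch classifiers: a small 3D subvolume centered on a point of the papyrus surface goes in, and a probability that the point carries ink comes out. Below is a minimal PyTorch sketch of that idea, with hypothetical patch sizes and layer widths; the competition-grade models are far larger and include transformer variants.

```python
import torch
import torch.nn as nn

class InkDetector(nn.Module):
    """Toy 3D CNN: CT subvolume around a surface point -> ink probability."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # pool to a single feature vector
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, depth, height, width) CT subvolumes
        feats = self.features(x).flatten(1)
        return torch.sigmoid(self.head(feats))  # ink probability per patch

# One training step on fake data; in the real project, labels came from
# detached fragments where the ink is visible in photographs and can be
# aligned to the scans.
model = InkDetector()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

patches = torch.randn(8, 1, 16, 32, 32)   # hypothetical patch size
labels = torch.randint(0, 2, (8, 1)).float()

optim.zero_grad()
loss = loss_fn(model(patches), labels)
loss.backward()
optim.step()
print(f"loss: {loss.item():.3f}")
```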
These breakthroughs culminated in the Vesuvius Challenge, launched in 2023 to crowdsource the decoding of unopened scrolls. Participants use AI tools—particularly convolutional neural networks and transformer models—to identify and reconstruct text within the scrolls. In October 2023, the first word ('purple') was read from an unopened scroll, earning a $40,000 prize. The challenge continues, with prizes offered for deciphering additional text and improving the technology.
Brent Seales, a computer scientist at the University of Kentucky and co-founder of the Vesuvius Challenge, told The Guardian that the team's current bottleneck is cleaning, organizing, and enhancing the scan data so that researchers can actually interpret the carbonized ink as text.
Importantly, the digital unwrapping process is guided by human expertise. AI highlights likely areas of ink on the ancient documents, but scholars interpret the patterns to determine whether they form coherent words or phrases. The goal is not only to recover lost philosophical texts, many of which may be by Epicurus or his followers, but also to establish a scalable system for digitizing and decoding ancient texts, transforming our understanding of the classical world.
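One simple way to picture that division of labor: the model emits a probability map over the unwrapped surface, and scholars compare renderings of it at several thresholds rather than trusting any single cutoff. A minimal sketch follows, assuming a NumPy probability map; the thresholds and file names are placeholders.

```python
import numpy as np
from PIL import Image

def render_for_review(prob_map: np.ndarray, thresholds=(0.3, 0.5, 0.7)) -> None:
    """Save one grayscale rendering per threshold for papyrologists to compare.

    prob_map -- 2D array of per-pixel ink probabilities over the
                unwrapped surface, values in [0, 1]
    """
    for t in thresholds:
        # Keep probability shading above the threshold, zero elsewhere,
        # so faint-but-plausible strokes stay visible for human judgment.
        masked = np.where(prob_map >= t, prob_map, 0.0)
        img = Image.fromarray((masked * 255).astype(np.uint8))
        img.save(f"ink_review_t{t:.1f}.png")  # placeholder file name

render_for_review(np.random.rand(512, 512))
```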
