
AI Identifies Author of Charred Scroll Buried by Vesuvius for 2,000 Years
For the first time, researchers have identified the author and title of a document that's been locked inside a charred scroll for nearly 2,000 years—without peeling back a single layer.
The scroll, PHerc. 172, was recovered from the ruins of Herculaneum, the ancient Roman town buried by ash and debris from Mount Vesuvius in 79 CE. It is one of three Herculaneum scrolls now held at Oxford's Bodleian Libraries.
Thanks to high-resolution scans and some seriously clever machine learning, scholars were able to virtually 'unwrap' the papyrus and read the title and author inside: On Vices, by the Epicurean philosopher Philodemus.
The treatise, whose full title, according to Fine Books Magazine, is On Vices and Their Opposite Virtues and In Whom They Are and About What, is essentially ancient self-help, exploring how to live a virtuous life by avoiding vice. Philodemus wrote the work in the first century BCE, and it is now being read for the first time since it was buried in the devastating eruption nearly 2,000 years ago.
The discovery—confirmed by multiple research teams—earned the project's collaborators the $60,000 First Title Prize from the Vesuvius Challenge, an open-science competition that's been making ancient texts readable using AI.
In recent years, artificial intelligence has been instrumental in deciphering the carbonized Herculaneum scrolls. First discovered in the 18th century at what is now known as the Villa of the Papyri, they comprise one of the few surviving libraries from the classical world.
Due to their fragile, charred condition, traditional (read: manual) methods of unrolling the scrolls often destroyed them. Now, researchers are using advanced imaging and machine learning to read these texts without ever opening them.
The turning point came in 2015, when scientists used X-ray tomography to read a different ancient scroll from En-Gedi, creating a 3D scan that could be virtually 'unwrapped.' Building on this, researchers at the University of Kentucky developed the Volume Cartographer, a program that uses micro-CT imaging to detect the faint traces of carbon-based ink on the scrolls.
Because the scrolls' carbon-based ink contains no metal, unlike many other ancient inks, it is nearly indistinguishable from the charred papyrus in X-ray scans; a neural network had to be trained to recognize the subtle patterns that indicate ink on the carbonized surface. In 2019, researchers successfully demonstrated this technique, setting the stage for broader applications.
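To give a rough sense of how such a detector works, here is a minimal sketch, assuming PyTorch and placeholder data rather than the Vesuvius Challenge teams' actual code or datasets: ink detection can be framed as a binary classifier over small 3D patches of the micro-CT volume, trained on patches from fragments where scholars can already see ink.

```python
# Illustrative sketch only, not the researchers' actual pipeline.
# Assumes hypothetical patch sizes and randomly generated placeholder data.
import torch
import torch.nn as nn

class InkPatchClassifier(nn.Module):
    """Classifies a small 3D micro-CT patch as 'ink' or 'no ink'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit for P(ink)

    def forward(self, x):
        # x: (batch, 1, depth, height, width) patch of CT intensities
        return self.head(self.features(x).flatten(1))

model = InkPatchClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder batch; in practice, labels come from exposed fragments
# where the ink is visible and can be traced by hand.
patches = torch.randn(8, 1, 16, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

optimizer.zero_grad()
loss = loss_fn(model(patches), labels)
loss.backward()
optimizer.step()
```

The key idea is that the network never sees letters directly; it only learns which local textures of the scan tend to co-occur with ink.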
These breakthroughs culminated in the Vesuvius Challenge, launched in 2023 to crowdsource the decoding of unopened scrolls. Participants use AI tools—particularly convolutional neural networks and transformer models—to identify and reconstruct text within the scrolls. In October 2023, the first word ('purple') was read from an unopened scroll, earning a $40,000 prize. The challenge continues, with prizes offered for deciphering additional text and improving the technology.
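Continuing the sketch above (again purely illustrative, with hypothetical shapes and names), a trained patch classifier of that kind can be slid across a virtually flattened segment of papyrus to produce an ink-probability map, the sort of image papyrologists then read as letters.

```python
# Illustrative sketch; assumes the InkPatchClassifier defined earlier
# and a flattened, normalized surface volume of shape (depth, height, width).
import torch

@torch.no_grad()
def ink_probability_map(model, surface, patch=64, stride=32, depth=16):
    """Slide a window over the flattened surface and average P(ink) per pixel."""
    _, h, w = surface.shape
    heat = torch.zeros(h, w)
    counts = torch.zeros(h, w)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            crop = surface[:depth, y:y + patch, x:x + patch]
            prob = torch.sigmoid(model(crop[None, None])).item()
            heat[y:y + patch, x:x + patch] += prob
            counts[y:y + patch, x:x + patch] += 1
    return heat / counts.clamp(min=1)

# Placeholder input; real inputs come from segmented micro-CT surfaces.
surface = torch.randn(16, 256, 256)
heatmap = ink_probability_map(InkPatchClassifier().eval(), surface)
```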
Brent Seales, a computer scientist at the University of Kentucky and co-founder of the Vesuvius Challenge, told The Guardian that the team's current bottleneck is cleaning, organizing, and enhancing the scan data so that researchers can actually interpret the carbonized ink as text.
Importantly, the digital unwrapping process is guided by human expertise. AI highlights likely areas of ink on the ancient documents, but scholars interpret those patterns to determine whether they form coherent words or phrases. The goal is not only to recover lost philosophical texts, many of them possibly by Epicurus or his followers, but also to establish a scalable system for digitizing and decoding ancient texts, one that could transform our understanding of the classical world.
