AI Identifies Author of Charred Scroll Buried by Vesuvius for 2,000 Years

Gizmodo, 06-05-2025

For the first time, researchers have identified the author and title of a document that's been locked inside a charred scroll for nearly 2,000 years—without peeling back a single layer.
The scroll, PHerc. 172, was recovered from the ruins of Herculaneum, the ancient Roman town buried in ash and debris when Mount Vesuvius erupted in 79 CE. It is one of three Herculaneum scrolls now held at Oxford's Bodleian Libraries.
Thanks to high-resolution scans and some seriously clever machine learning, scholars were able to virtually 'unwrap' the papyrus and read the title inside: On Vices, by the Epicurean philosopher Philodemus.
The treatise—its full title being On Vices and Their Opposite Virtues and In Whom They Are and About What, according to Fine Books Magazine—is basically ancient self-help, exploring how to live a virtuous life by avoiding vice. Philodemus wrote the work in the first century BCE, and it is now being read for the first time since the eruption buried it nearly 2,000 years ago.
The discovery—confirmed by multiple research teams—earned the project's collaborators the $60,000 First Title Prize from the Vesuvius Challenge, an open-science competition that's been making ancient texts readable using AI.
In recent years, artificial intelligence has been instrumental in deciphering the carbonized scrolls of Herculaneum. First discovered in the 18th century at the site now known as the Villa of the Papyri, the scrolls comprise one of the only surviving libraries from the classical world.
Due to their fragile, charred condition, traditional (read: manual) methods of unrolling the scrolls often destroyed them. Now, researchers are using advanced imaging and machine learning to read these texts without ever opening them.
The turning point came in 2015, when scientists used X-ray tomography to read a different ancient scroll, from En-Gedi in Israel, creating a 3D scan that could be virtually 'unwrapped.' Building on this, researchers at the University of Kentucky developed Volume Cartographer, software that maps the coiled layers of papyrus within micro-CT scans so the faint traces of carbon-based ink can be detected.
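To make the idea concrete, here is a minimal, purely illustrative Python sketch of surface sampling, the step at the heart of virtual unwrapping: given a 3D scan and a traced sheet of papyrus, voxel intensities are sampled along the sheet to produce a flat image that a model can read. Everything here—the random volume and the simple cylindrical surface standing in for a real segmentation—is an assumption for demonstration; this is not the Volume Cartographer code.

```python
# Illustrative "virtual unwrapping": sample voxel intensities from a 3D
# volume along a synthetic cylindrical sheet, producing a flat surface slab.
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(0)
volume = rng.random((64, 128, 128)).astype(np.float32)  # stand-in micro-CT scan (z, y, x)

# Hypothetical "unrolled" surface: a partial cylinder around the volume center.
thetas = np.linspace(0.0, np.pi, 256)   # unrolled horizontal axis
zs = np.arange(8, 56)                   # unrolled vertical axis
radius, cy, cx = 40.0, 64.0, 64.0
layers = np.arange(-3, 4)               # sample 7 layers straddling the sheet

theta_g, z_g = np.meshgrid(thetas, zs)  # (rows, cols) of the flat image
surface = np.empty((len(layers), *theta_g.shape), dtype=np.float32)
for i, offset in enumerate(layers):
    r = radius + offset                 # step along the outward surface normal
    ys = cy + r * np.sin(theta_g)
    xs = cx + r * np.cos(theta_g)
    coords = np.stack([z_g, ys, xs])    # (3, rows, cols) voxel coordinates
    surface[i] = map_coordinates(volume, coords, order=1, mode="nearest")

print(surface.shape)  # (7, 48, 256): a flattened slab of the papyrus sheet
```

Sampling several layers rather than a single surface reflects a practical reality: a real papyrus sheet wanders through the voxel grid, so a thin slab around the traced surface is more forgiving than a single slice.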
Because the ink contains no metal, unlike many other ancient inks, it is nearly indistinguishable from the carbonized papyrus in X-ray scans; a neural network had to be trained to recognize the subtle textural patterns that indicate ink. In 2019, researchers successfully demonstrated this technique, setting the stage for broader applications.
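The ink-detection step can be sketched in the same hedged spirit: a small 3D convolutional network that classifies whether a patch of scan data contains ink. The architecture, sizes, and random training data below are hypothetical stand-ins; the actual Vesuvius Challenge models are far larger and are trained on patches labeled from infrared photos of opened fragments.

```python
# Hypothetical ink-detection sketch: a tiny 3D CNN classifying subvolume
# patches as ink vs. no ink, trained on random stand-in data so it runs.
import torch
from torch import nn

class InkPatchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, 1)  # one logit: ink vs. no ink

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = InkPatchNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in training batch: 16 subvolume patches of 7x32x32 voxels each.
patches = torch.randn(16, 1, 7, 32, 32)
labels = torch.randint(0, 2, (16, 1)).float()

for _ in range(5):  # a few illustrative optimization steps
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.3f}")
```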
These breakthroughs culminated in the Vesuvius Challenge, launched in 2023 to crowdsource the decoding of unopened scrolls. Participants use AI tools—particularly convolutional neural networks and transformer models—to identify and reconstruct text within the scrolls. In October 2023, the first word ('purple') was read from an unopened scroll, earning a $40,000 prize. The challenge continues, with prizes offered for deciphering additional text and improving the technology.
Brent Seales, a computer scientist at the University of Kentucky and co-founder of the Vesuvius Challenge, told The Guardian that the team's current bottleneck is cleaning, organizing, and enhancing the scan data so that researchers can actually interpret the carbonized ink as text.
Importantly, the digital unwrapping process is guided by human expertise. AI highlights likely areas of ink on the ancient documents, but scholars interpret the patterns to determine whether they form coherent words or phrases. The goal is not only to recover lost philosophical texts, many of which may be by Epicurus or his followers, but also to establish a scalable system for digitizing and decoding ancient texts—transforming our understanding of the classical world.
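As a rough illustration of that human-in-the-loop workflow, the sketch below slides a stand-in classifier across a flattened surface and assembles the kind of ink-probability map a papyrologist would then inspect for letter shapes. The stub model, patch sizes, and threshold are all assumptions for demonstration, not the teams' actual pipeline.

```python
# Illustrative inference pass: slide a (stubbed) classifier over a flattened
# surface slab and build a 2D ink-probability map for human review.
import numpy as np

def fake_ink_probability(patch: np.ndarray) -> float:
    """Stand-in for a trained network's sigmoid output on one patch."""
    return float(1 / (1 + np.exp(-patch.mean())))

surface = np.random.randn(7, 48, 256).astype(np.float32)  # flattened slab (layers, rows, cols)
ph, pw, stride = 16, 16, 8

rows = range(0, surface.shape[1] - ph + 1, stride)
cols = range(0, surface.shape[2] - pw + 1, stride)
ink_map = np.zeros((len(rows), len(cols)), dtype=np.float32)
for i, r in enumerate(rows):
    for j, c in enumerate(cols):
        ink_map[i, j] = fake_ink_probability(surface[:, r:r + ph, c:c + pw])

# Scholars would review the regions the model flags as likely ink.
print(f"{(ink_map > 0.5).mean():.0%} of locations flagged as possible ink")
```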
