
Dead Sea Scroll breakthrough: AI analysis suggests the ancient manuscripts are even OLDER than we thought
The Dead Sea Scrolls are surely among the most historically and biblically important objects known to humankind.
Found in caves near the Dead Sea almost 80 years ago, these ancient manuscripts have transformed understanding of Jewish and Christian origins.
Penned upon the roughly 1,000 scrolls are profound religious texts, hymns, prayers, legal codes, commentaries and more.
Until now, the scrolls have been assumed to date somewhere between the third century BC and the first century AD.
But according to a new AI analysis, some of the scrolls date back as far as the fourth century BC – nearly 2,500 years ago.
While the experts only analysed about a tenth of all the Dead Sea Scrolls, they hope AI could continue to reveal more about their history.
'Often individual manuscript dates are older than previously thought,' lead study author Professor Mladen Popović at the University of Groningen told MailOnline.
'The strength and significance of the AI tool that we have developed is that it makes it possible to provide much more accurate date estimates.'
First found in 1946 in the Qumran Caves of the Judaean Desert, near the Dead Sea, the scrolls comprise around 1,000 ancient manuscripts in thousands of fragments.
The Dead Sea Scrolls were mostly made of parchment – processed animal hide known as vellum – but also of papyrus, a plant-based material, and even sheets of metal.
The text upon them was penned in four languages – Hebrew, Aramaic, Greek, and Nabataean (the language of the ancient Arab Nabataean people).
But their religious significance stems largely from the fact that the scrolls include some early Hebrew texts that would go on to form part of the Old Testament of the Bible.
Little has been known about the scribe or scribes who laboured to produce the individual scrolls – as the works were not signed.
It is sometimes possible to determine the approximate age of undated manuscripts by evaluating their handwriting – but for this method to work, there need to be enough accurately dated manuscripts to build a reliable timeline of handwriting styles.
For the study, researchers used radiocarbon dating to work out the age of historic manuscripts from various sites in modern-day Israel and the West Bank.
They then used a previously developed AI neural network called BiNet to study the handwriting styles of each document, right down to the ink-trace patterns.
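To illustrate the idea – not BiNet itself, whose deep-learning internals are far more sophisticated and are not reproduced here – below is a minimal sketch of what 'ink-trace' analysis might look like in code: binarising a scanned page and summarising the widths of its ink strokes. The function name, threshold and features are hypothetical stand-ins.

```python
import numpy as np

def ink_trace_features(gray, threshold=0.5):
    """Binarise a greyscale page image and summarise its ink traces."""
    ink = gray < threshold                    # dark pixels count as ink
    coverage = ink.mean()                     # fraction of the page that is ink
    runs = []                                 # horizontal run lengths of ink,
    for row in ink:                           # a crude proxy for stroke width
        run = 0
        for px in row:
            if px:
                run += 1
            elif run:
                runs.append(run)
                run = 0
        if run:
            runs.append(run)
    widths = np.array(runs) if runs else np.zeros(1)
    return np.array([coverage, widths.mean(), widths.std()])

# Toy usage: a random "page" stands in for a scanned scroll fragment.
page = np.random.default_rng(1).random((64, 64))
print(ink_trace_features(page))
```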
How old are the Dead Sea Scrolls?
Until now, the scrolls have been assumed to date somewhere between the third century BC and the first century AD.
But according to the study, some of the scrolls date back as far as the fourth century BC - nearly 2,500 years ago.
The ancient manuscripts were discovered between 1947 and 1956 in 11 caves near Khirbet Qumran, on the northwestern shores of the Dead Sea.
By pairing those two datasets, the team created an AI program called 'Enoch' that can use the handwriting style of other manuscripts from the region to objectively estimate an approximate age range – including for the Dead Sea Scrolls.
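Again as an illustration only, the sketch below shows the general shape of such a pipeline: handwriting feature vectors (random stand-ins here, not real BiNet output) paired with radiocarbon date labels train a regressor that returns both a date estimate and an uncertainty range. The embeddings, dates and choice of model are all assumptions; the study's actual Enoch model is not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy training data: 64-dimensional handwriting embeddings (hypothetical
# stand-ins for features a network like BiNet might produce) paired with
# radiocarbon dates, in years (negative = BC, positive = AD).
X_train = rng.normal(size=(40, 64))
y_train = rng.uniform(-350, 150, size=40)

# A Gaussian process regressor returns both a date estimate and an
# uncertainty band, mirroring the study's reported plus-or-minus ranges.
model = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
model.fit(X_train, y_train)

# Estimate a date range for an undated manuscript's embedding.
x_new = rng.normal(size=(1, 64))
mean, std = model.predict(x_new, return_std=True)
print(f"Estimated date: {mean[0]:.0f} +/- {2 * std[0]:.0f} years")
```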
To test the program, ancient handwriting experts evaluated Enoch's age estimates for 135 of the Dead Sea Scrolls.
The experts determined that 79 per cent of the AI's estimates were 'realistic', while the remaining 21 per cent were determined too old, too young or indecisive.
Crucially, both the AI and radiocarbon dating methods estimated older ages for many of the Dead Sea Scrolls than did traditional handwriting analysis.
'In general, the date predictions by Enoch for individual manuscripts moves within the timeframe of late fourth century BC until second century AD,' Professor Popović told MailOnline.
'But within that time frame more manuscripts are now older, being dated to the first half of the second century BC, the third century BC and in two cases even into the late fourth century BC.'
Professor Popović said his team 'have not yet dated all manuscripts', so potentially more is to be learned about the overall age of the scrolls.
'There are more than 1,000 Dead Sea Scrolls so our study is a first but significant step, opening a door unto history with new possibilities for research,' he said.
Until now, the dating of individual manuscripts was mostly based on 'palaeography' – the study of ancient handwriting alone.
But the new study, published in the journal PLOS One, combines palaeography with AI and radiocarbon dating to get a fuller picture.
Enoch also gives researchers a powerful new tool that can refine estimates for specific manuscripts, often to within plus or minus 50 years.
The authors say: 'With the Enoch tool we have opened a new door into the ancient world, like a time machine, that allows us to study the hands that wrote the Bible.
'It is very exciting to set a significant step into solving the dating problem of the Dead Sea Scrolls and also creating a new tool that could be used to study other partially dated manuscript collections from history.'
The Dead Sea Scrolls were discovered between 1946 and 1956 and date back 2,000 years
Discovered between 1946 and 1956, the Dead Sea Scrolls are a collection of ancient manuscripts dating back 2,000 years or more.
The texts include tens of thousands of parchment and papyrus fragments and, in rare cases, entire manuscripts.
They contain parts of what is now known as the Hebrew Bible as well as a range of extra-biblical documents.
The scrolls were found by shepherd Muhammed Edh-Dhib as he searched for a stray among the limestone cliffs at Khirbet Qumran on the shores of the Dead Sea in what was then British Mandate Palestine - now the West Bank.
The story goes that, spotting a cave in a crevice of a steep rocky hillside, Muhammed hurled a stone into the dark interior and was startled to hear the sound of breaking pots.
Venturing inside, the young Bedouin found a mysterious collection of large clay jars in which he found old scrolls, some wrapped in linen and blackened with age.
The texts have since been excavated by archaeologists, who are now racing to digitise their contents before they deteriorate beyond legibility.
The texts are of great historical and religious significance and include the earliest known surviving copies of biblical and extra-biblical documents, as well as preserving evidence of diversity in late Second Temple Judaism.