
AI will soon be able to audit all published research
Self-correction is fundamental to science. One of its most important forms is peer review, in which anonymous experts scrutinise research before it is published. This helps safeguard the accuracy of the written record.

Yet problems slip through. A range of grassroots and institutional initiatives work to identify problematic papers, strengthen the peer-review process and clean up the scientific record through retractions or journal closures. But these efforts are imperfect and resource intensive.

Soon, artificial intelligence (AI) will be able to supercharge these efforts. What might that mean for public trust in science?

In recent decades, the digital age and disciplinary diversification have sparked an explosion in the number of scientific papers being published, the number of journals in existence, and the influence of for-profit publishing.

This has opened the door to exploitation. Opportunistic "paper mills" sell quick publication with minimal review to academics desperate for credentials, while publishers generate substantial profits through huge article-processing fees.

Corporations have also seized the opportunity to fund low-quality research and ghostwrite papers intended to distort the weight of evidence, influence public policy and sway public opinion in favour of their products.

These ongoing challenges highlight the insufficiency of peer review as the primary guardian of scientific reliability. In response, efforts have sprung up to bolster the integrity of the scientific enterprise.

Retraction Watch tracks retracted papers and other cases of academic misconduct. Academic sleuths and initiatives such as Data Colada identify manipulated data and figures. Investigative journalists expose corporate influence. A new field of meta-science (the science of science) attempts to measure the processes of science itself and to uncover its biases and flaws.

Not all bad science has a major impact, but some certainly does. It doesn't just stay within academia; it often seeps into public understanding and policy.

In a recent investigation, we examined a widely cited safety review of the herbicide glyphosate, which appeared to be independent and comprehensive. In reality, documents produced during legal proceedings against Monsanto revealed that the paper had been ghostwritten by Monsanto employees and published in a journal with ties to the tobacco industry.

Even after this was exposed, the paper continued to shape citations, policy documents and Wikipedia pages worldwide.

When problems like this are uncovered, they can make their way into public conversations, where they are not necessarily perceived as triumphant acts of self-correction. Rather, they may be taken as proof that something is rotten in the state of science. This "science is broken" narrative undermines public trust.

Until recently, technological assistance in self-correction was mostly limited to plagiarism detectors. But things are changing. Machine-learning services such as ImageTwin and Proofig now scan millions of figures for signs of duplication, manipulation and AI generation.

Natural language processing tools flag "tortured phrases" - the tell-tale word salads that paper mills produce when they paraphrase standard terminology to evade plagiarism detectors.
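To make the idea concrete, here is a minimal sketch of how such a screener might flag tortured phrases. The phrase list and the simple matching rule below are illustrative assumptions, not the inner workings of any particular tool.

```python
# A minimal, illustrative sketch of tortured-phrase screening.
# The phrase list is a small sample of paraphrases reported in the
# research-integrity literature; real tools use far larger curated lists.
import re

# Suspicious paraphrase -> the standard term it likely replaces
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "irregular woodland": "random forest",
    "colossal information": "big data",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely intended term) pairs found in the text."""
    hits = []
    lowered = text.lower()
    for phrase, standard in TORTURED_PHRASES.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, standard))
    return hits

if __name__ == "__main__":
    sample = "We apply counterfeit consciousness and profound learning to the data."
    for phrase, standard in flag_tortured_phrases(sample):
        print(f"Flagged '{phrase}' (likely a paraphrase of '{standard}')")
```

Production screeners operate at far larger scale and combine curated phrase lists with statistical language models, but the underlying signal is the same: improbable paraphrases of standard terminology are a strong indicator of machine-rewritten text.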
Bibliometric dashboards such as the one offered by Semantic Scholar trace whether papers are cited in support or in contradiction.

AI - especially agentic, reasoning-capable models increasingly proficient in mathematics and logic - will soon uncover more subtle flaws.

For example, the Black Spatula Project explores the ability of the latest AI models to check published mathematical proofs at scale, automatically identifying algebraic inconsistencies that eluded human reviewers. Our own work mentioned above also relies substantially on large language models to process large volumes of text.

Given full-text access and sufficient computing power, these systems could soon enable a global audit of the scholarly record. A comprehensive audit will likely find some outright fraud and a much larger mass of routine, journeyman work with garden-variety errors.

We do not yet know how prevalent fraud is, but we do know that a great deal of scientific work is inconsequential. Scientists know this; it is widely discussed that much published work is never, or only rarely, cited.

To outsiders, this revelation may be as jarring as uncovering fraud, because it collides with the image of dramatic, heroic scientific discovery that populates university press releases and trade press treatments.

What might give this audit added weight is its AI author, which may be seen as (and may in fact be) impartial and competent, and therefore reliable.

As a result, these findings will be vulnerable to exploitation in disinformation campaigns, particularly since AI is already being used to that end.

Safeguarding public trust requires redefining the scientist's role in more transparent, realistic terms. Much of today's research is incremental, career-sustaining work rooted in education, mentorship and public engagement.

If we are to be honest with ourselves and with the public, we must abandon the incentives that pressure universities, scientific publishers and scientists themselves to exaggerate the significance of their work. Truly ground-breaking work is rare. But that does not render the rest of scientific work useless.

A more humble and honest portrayal of the scientist as a contributor to a collective, evolving understanding will be more robust to AI-driven scrutiny than the myth of science as a parade of individual breakthroughs.

A sweeping, cross-disciplinary audit is on the horizon. It could come from a government watchdog, a think tank, an anti-science group or a corporation seeking to undermine public trust in science.

Scientists can already anticipate what it will reveal. If the scientific community prepares for the findings - or, better still, takes the lead - the audit could inspire a disciplined renewal. But if we delay, the cracks it uncovers may be misinterpreted as fractures in the scientific enterprise itself.

Science has never derived its strength from infallibility. Its credibility lies in the willingness to correct and repair. We must now demonstrate that willingness publicly, before trust is broken.
