
Latest news with #RetractionWatch

Fraudulent scientific papers are booming
Hindustan Times, 4 days ago

SCIENTIFIC JOURNALS exist to do one thing: provide accurate, peer-reviewed reports of new research to an interested audience. But according to a paper published in PNAS on August 4th, that lofty goal is badly compromised. Scientific fraud, its authors conclude, happens on a massive scale and is growing quickly. In fact, though the number of scientific articles doubles every 15 years or so, the number thought to be fraudulent is doubling every 1.5 years (see chart).

It has long been clear that publication fraud rarely comes from lone fraudsters. Instead, companies known as paper mills prepare fake scientific papers full of made-up experiments and bogus data, often with the help of artificial-intelligence (AI) models, and sell authorship to academics looking to boost their publication numbers. But the analysis conducted by Dr Amaral, one of the paper's authors, and his colleagues suggests that some journal editors may be knowingly waving these papers through. Their article indicates that a small subset of journal editors are responsible for a disproportionate share of the questionable papers their publications produce.

To arrive at their conclusion, the authors looked at papers published by PLOS ONE, an enormous and generally well-regarded journal that identifies which of its 18,329 editors is responsible for each paper. (Most editors are academics who agree to oversee peer review alongside their research.) Since 2006 the journal has published 276,956 articles, 702 of which have been retracted and 2,241 of which have received comments on PubPeer, a site that allows other academics and online sleuths to raise concerns. When the team crunched the data, they found 45 editors who facilitated the acceptance of retracted or flagged articles much more frequently than would be expected by chance. Although they were responsible for the peer-review process of only 1.3% of PLOS ONE submissions, they were responsible for 30.2% of retracted articles. The data suggested yet more worrying patterns.
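The phrase "much more frequently than would be expected by chance" can be made concrete with a rough back-of-envelope calculation. The sketch below is illustrative only (it is not the study's actual methodology) and assumes a simple null model in which retractions fall on editors in proportion to the papers they handle, using the figures quoted in the article:

```python
import math

# Figures quoted in the article: the flagged editors handled about
# 1.3% of PLOS ONE submissions but account for 30.2% of the
# journal's 702 retractions.
total_retractions = 702
group_share = 0.013
observed = round(0.302 * total_retractions)   # about 212 retractions

# Null model (an assumption for illustration): retractions land on
# editors in proportion to papers handled, so the group's count is
# roughly Poisson-distributed with this mean.
expected = total_retractions * group_share    # about 9.1 retractions

def log_poisson_pmf(k: int, lam: float) -> float:
    """Log-probability of exactly k events under Poisson(lam),
    computed in log space to avoid floating-point underflow."""
    return -lam + k * math.log(lam) - math.lgamma(k + 1)

log_p = log_poisson_pmf(observed, expected)
print(f"expected about {expected:.1f}, observed {observed}")
print(f"log-probability of the observed count: {log_p:.0f}")
```

Under this toy null model the log-probability comes out in the region of -470 (natural log), i.e. the observed count is astronomically unlikely to arise by chance, which is the sense in which the pattern is "more than expected".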
For one thing, more than half of these editors were themselves authors of papers later retracted by PLOS ONE. What's more, when they submitted their own papers to the journal, they regularly suggested each other as editors. Although papers can be retracted for many reasons, including honest mistakes, Dr Amaral believes these patterns indicate a network of editors co-operating to bypass the journal's usual standards.

Dr Amaral does not name the editors in his article, but Nature, a science magazine, subsequently made use of his analysis to track down five of the relevant editors. PLOS ONE says that all five were investigated and dismissed between 2020 and 2022. Those who responded to Nature's enquiries denied wrongdoing.

Compelling as Dr Amaral's analysis is, it does not conclusively prove dishonest behaviour. All the same, the findings add to a growing body of evidence suggesting some editors play an active role in the publication of substandard research. An investigation in 2024 by Retraction Watch, an organisation that monitors retracted papers, and Science, another magazine, found that paper mills have bribed editors in the past. Editors might also use their powers to further their own academic careers. Sleuths on PubPeer have flagged papers in several journals which seem to be co-written by either the editor overseeing the peer review or one of their close collaborators, a clear conflict of interest.

Detecting networks of editors the way Dr Amaral's team has 'is completely new', says Alberto Ruano Raviña of the University of Santiago de Compostela in Spain, who researches scientific fraud and was not involved with the study. He is particularly worried about fake papers remaining part of the scientific record in medical fields, where their spurious findings might be used to conduct reviews that inform clinical guidelines.
A recent paper in the BMJ, a medical journal, found that 8-16% of the conclusions in systematic reviews that included later-retracted evidence ended up being wrong. 'This is a real problem,' says Dr Ruano Raviña.

Yet the incentives for fraud continue to outweigh the consequences. Measures including a researcher's number of publications and citations have become powerful proxies for academic achievement, and are seen as necessary for building a career. 'We have become focused on numbers,' says Dr Amaral. This is sometimes made explicit: staff at Indian medical colleges are required to publish a certain number of papers in order to progress. Some journals, for their part, make more money the more articles they accept.

Breaking either trend will take time. In the meantime, publishers are rolling out new screening tools for suspicious content, including some which spot 'tortured phrases' (nonsensical plagiarism-evading paraphrases generated by AI models, such as 'colossal information' instead of 'big data') or citations in the wrong places. There is also increasing pressure on publishers to root out bad papers. Databases of reputable journals, such as Scopus or Web of Science, can 'de-list' journals, ruining their reputations. It is up to the publishers to bring about a relisting, which means tidying up the journal. 'If we see untrustworthy content that you're not retracting, you're not getting back in,' says Nandita Quaderi, editor-in-chief of Web of Science.

But whether publishers and the many editors who work hard to keep bad science out of their journals can keep up with the paper mills remains to be seen.
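At their simplest, tortured-phrase screeners work by matching text against a curated list of known paraphrase fingerprints. The toy sketch below is my own minimal illustration of that idea, not any publisher's actual tool; 'colossal information' comes from the article above, and 'counterfeit consciousness' (for 'artificial intelligence') and 'irregular timberland' (for 'random forest') are documented examples from the tortured-phrases literature:

```python
# Known tortured phrase -> the standard term it likely replaced.
TORTURED = {
    "colossal information": "big data",
    "counterfeit consciousness": "artificial intelligence",
    "irregular timberland": "random forest",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely original term) pairs found in text."""
    lowered = text.lower()
    return [(bad, good) for bad, good in TORTURED.items() if bad in lowered]

hits = flag_tortured_phrases(
    "Our colossal information pipeline trains an irregular timberland model."
)
print(hits)
```

Real screeners maintain fingerprint lists with thousands of entries and combine them with image-duplication and citation checks; the dictionary lookup here only shows the core matching step.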

As India's retractions surge, NIRF rankings only now begin penalising tainted research
The Hindu, 5 days ago

Between 2004 and 2020, five research papers published by Zillur Rahman on various management topics, such as corporate social responsibility, self-service banking technologies, and service delivery options, were retracted. Yet he served as the dean and professor of management studies at IIT Roorkee until May 2025. As per data from Retraction Watch, a non-profit scientific watchdog that reports retractions of academic papers from across the globe, the former dean's papers were retracted for various reasons, such as plagiarism, duplication, and concerns about data.

'When I reported about Mr. Rahman's retractions to IIT Roorkee on their LinkedIn page, they asked me to provide the list of retractions, and I did. Months later, when I followed up, they asked me to reach out via email. I did not pursue the matter further,' said Achal Agarwal, founder of India Research Watchdog, a not-for-profit that flags research misconduct in Indian academia. The Hindu reached out to Mr. Rahman and IIT Roorkee's management. There has been no response.

India ranks third in the number of retractions, behind only the U.S. and China, as per data from Post Pub, a platform that helps visualise country-wise statistics of retractions. Post Pub data shows that India had a retraction rate (number of retractions for every 1,000 papers submitted) of 1.5 in 2012, which increased steeply to 3.5 in 2022. 'The U.S. has a very large science budget, so it's expected to see a higher number of retractions. China is aiming to become the global leader in research, which makes publishing a top priority. In India, the competition is intense, especially among Ph.D. aspirants and Master's students aiming for doctoral programs,' said John (name changed on request), a sleuth who flags frauds on Twitter.
The need for legislation

Ever since research papers became a parameter in the National Institutional Ranking Framework (NIRF), private universities have been churning out more research papers, albeit with little focus on quality. That is exactly why there are more papers, and more retractions, from private universities in the past ten years, as per data from Cornell University.

The problem is twofold: the lack of stringent laws to curb scientific corruption and protect sleuths, and the negligence of educational institutions that fosters impunity among researchers. In India, in the absence of legislation, the onus to prevent fraud falls solely on the institutions. However, sleuths say that most universities stay mum; firing an academician over research fraud just doesn't happen in India, unlike in some other countries. 'As long as there is no good legislation to actually sue some of these frauds, nothing will happen. The legislation should have the norms to sue the frauds not just for the fake paper, but also for using government funding to create such a paper. It is a waste of public funding,' said John.

The U.K. Research Integrity Office (UKRIO), established in 2006, is an independent charitable body that offers expert advice on research integrity and provides templates for misconduct investigations. Denmark's Act on Research Misconduct, enacted in 2017, assigns severe cases related to fabrication, falsification, and plagiarism to the Danish Board on Research Misconduct for investigation. 'In India, there is a need to have an autonomous, empowered body to look into the complaints. Currently, the complaints go to the respective body governing the institute, such as the Department of Science and Technology or the University Grants Commission (UGC). They don't take these complaints seriously,' said Ms. Agarwal. India Research Watchdog receives around 10 messages every day from whistleblowers across the country.
The nuances of cheating

Every retracted paper doesn't indicate fraud; sometimes researchers identify unintentional mistakes such as calculation errors, experimental flaws, or inaccurate data analysis, and withdraw the papers themselves. With the rise of sleuths and watchdog organisations, academicians have grown savvier, learning how to cover their tracks and evade detection. 'It is very difficult to catch smart frauds – those who don't blindly use Artificial Intelligence texts and those who don't just copy-paste texts and images,' Mr. John said. 'We check for tortured phrases, image overlaps, image fakery, statistically improbable data, and methodologies in an academic paper to find out its authenticity. However, the smart frauds have evolved – they plug all these gaps to get better with fraud,' Mr. John added.

Since not all universities punish researchers whose papers are retracted, the ball seems to be in the court of publishers. Publishers such as Frontiers rely on AI to check research papers, but a statement from the publisher says there have been cases of fraud even after the deployment of AI. Frontiers' Artificial Intelligence Review Assistant (AIRA) was launched in 2018 and now includes over 50 verifications of submitted manuscripts. On July 29, the communications team of Frontiers put out a notice that said, 'Frontiers' Research Integrity Auditing team has uncovered a network of authors and editors who conducted peer review with undisclosed conflicts of interest and who have engaged in citation manipulation. The unethical actions of this network have been confirmed in 122 articles published in Frontiers, across five journals, and have led to their retraction.'

Beyond plagiarism

Ever since the colonial era, it has been mandatory for Indian researchers to send their theses to two foreign evaluators, a practice that began when British academics were the default choice.
'Rather than sending the papers to reputed universities in countries such as Germany, Australia, or the U.S., the lower-quality ones are often sent to universities in Malaysia or Thailand,' said Prof. V. Ramachandran of Anna University. He pointed to another nuance that is common in India's academic system: a nexus between guides, students, and examiners. 'The guides propose a list of examiners to the university, a list that students are aware of. These examiners are often acquaintances of the guide or the student,' Prof. Ramachandran explained, suggesting that universities should independently constitute evaluation panels. 'The examiners must be random and unknown to the student or the guide, and they should be from well-established institutions,' he said.

In private universities, academicians are often pressured to publish research papers without adequate funding support. 'At institutions such as mine, faculty are expected to begin research with just ₹1-2 lakh, and that's considered a luxury. In many private universities, researchers are made to start with zero funding,' said a professor from a private university in Tamil Nadu, speaking on condition of anonymity. 'Research output is a key metric in the NIRF rankings, and students look at these rankings while choosing colleges. It's all tied to a profit-making model,' the professor said.

Besides fraud, another issue plaguing research is the rise in publications in dubious journals. 'High-standard journals follow tough peer review processes, demand original data and sound science, and are mostly read and cited by reputed researchers across the world. On the other hand, low-standard journals have become dumping grounds for unethical research. Since good scientists don't read these journals, the fraud often goes unnoticed, and most of it never even gets retracted,' the professor added.
Beyond elite institutions, the crisis of research quality runs deeper in smaller universities and colleges, particularly those funded by state governments. 'In Tier 2 and Tier 3 institutions, especially state universities, the drop in research quality isn't linear, it's exponential. Many are publishing in predatory venues. This is far more common in state universities, and that's where serious streamlining is needed,' said a senior academic from IISc Bangalore, seeking anonymity. 'Instead of counting papers, we should assess them on the impact of their teaching.'

A long way to go

Starting this year, the NIRF will begin assigning negative scores to higher educational institutions for research papers that have been retracted in the past three calendar years, along with any citations those papers had accumulated. While experts see this as a welcome step, many believe the journey toward ensuring research integrity in India remains long.

'At BITS Pilani, we are setting up a Research Integrity Office to proactively educate and sensitise our research community,' said Professor V. Ramgopal Rao, Group Vice Chancellor of BITS Pilani. 'With over 500 new Ph.D. students joining us each year, we see it as our responsibility to train both faculty and students on best practices in research, responsible experimentation, and academic ethics.' Professor Rao, who has been consistently vocal about the need to tackle research fraud, has advocated the creation of oversight mechanisms at both the institutional and national levels. 'The UGC is fundamentally a grants commission. It neither has the mandate nor the necessary structures to investigate or act on cases of research misconduct. Even if the UGC withholds funding, such activities may continue unchecked,' he observed.

The Government of India has introduced a bill in Parliament to set up the Higher Education Commission of India (HECI), which will serve as a single regulator replacing bodies like the UGC and AICTE.
Commenting on this, Prof. Rao said, 'The proposed HECI will have the authority to impose penalties on institutions and even recommend their closure in extreme cases. However, since education is a concurrent subject under the Constitution, the Centre cannot act unilaterally. Cooperation from State governments is essential, and that makes the road to implementation long and uncertain.' Drawing a comparison with global practices, he added, 'In the U.S. and Europe, research fraud is treated with the seriousness it deserves. Academicians found guilty can lose their jobs. In India, unfortunately, we have seen cases where even vice-chancellors have been implicated in academic misconduct. When leadership itself is compromised, enforcing standards across the system becomes a much bigger challenge.'

(Laasya is an independent journalist with bylines in BBC, Thomson Reuters and Mongabay India, among a dozen others. One day she is tracking climate finance; the next, she's decoding education reforms, dissecting caste realities or tracing wildlife in forgotten forests.)

AI will soon be able to audit all published research
Time of India, 27-07-2025

Self-correction is fundamental to science. One of its most important forms is peer review, when anonymous experts scrutinise research before it is published. This helps safeguard the accuracy of the written record. Yet problems slip through. A range of grassroots and institutional initiatives work to identify problematic papers, strengthen the peer-review process, and clean up the scientific record through retractions or journal closures. But these efforts are imperfect and resource intensive. Soon, artificial intelligence (AI) will be able to supercharge these efforts. What might that mean for public trust in science?

In recent decades, the digital age and disciplinary diversification have sparked an explosion in the number of scientific papers being published, the number of journals in existence, and the influence of for-profit publishing. This has opened the doors for exploitation. Opportunistic "paper mills" sell quick publication with minimal review to academics desperate for credentials, while publishers generate substantial profits through huge article-processing fees. Corporations have also seized the opportunity to fund low-quality research and ghostwrite papers intended to distort the weight of evidence, influence public policy and alter public opinion in favour of their products.

These ongoing challenges highlight the insufficiency of peer review as the primary guardian of scientific reliability. In response, efforts have sprung up to bolster the integrity of the scientific enterprise. Retraction Watch actively tracks withdrawn papers and other academic misconduct. Academic sleuths and initiatives such as Data Colada identify manipulated data and figures. Investigative journalists expose corporate influence. A new field of meta-science (science of science) attempts to measure the processes of science and to uncover biases and flaws. Not all bad science has a major impact, but some certainly does.
It doesn't just stay within academia; it often seeps into public understanding and policy. In a recent investigation, we examined a widely-cited safety review of the herbicide glyphosate, which appeared to be independent and comprehensive. In reality, documents produced during legal proceedings against Monsanto revealed that the paper had been ghostwritten by Monsanto employees and published in a journal with ties to the tobacco industry. Even after this was exposed, the paper continued to shape citations, policy documents and Wikipedia pages worldwide. When problems like this are uncovered, they can make their way into public conversations, where they are not necessarily perceived as triumphant acts of self-correction. Rather, they may be taken as proof that something is rotten in the state of science. This "science is broken" narrative undermines public trust.

Until recently, technological assistance in self-correction was mostly limited to plagiarism detectors. But things are changing. Machine-learning services such as ImageTwin and Proofig now scan millions of figures for signs of duplication, manipulation and AI generation. Natural language processing tools flag "tortured phrases" - the tell-tale word salads of paper mills. Bibliometric dashboards such as one by Semantic Scholar trace whether papers are cited in support or contradiction. AI - especially agentic, reasoning-capable models increasingly proficient in mathematics and logic - will soon uncover more subtle flaws. For example, the Black Spatula Project explores the ability of the latest AI models to check published mathematical proofs at scale, automatically identifying algebraic inconsistencies that eluded human reviewers. Our own work mentioned above also substantially relies on large language models to process large volumes of text. Given full-text access and sufficient computing power, these systems could soon enable a global audit of the scholarly record.
A comprehensive audit will likely find some outright fraud and a much larger mass of routine, journeyman work with garden-variety errors. We do not know yet how prevalent fraud is, but what we do know is that an awful lot of scientific work is inconsequential. Scientists know this; it's much discussed that a good deal of published work is never or very rarely cited. To outsiders, this revelation may be as jarring as uncovering fraud, because it collides with the image of dramatic, heroic scientific discovery that populates university press releases and trade press treatments. What might give this audit added weight is its AI author, which may be seen as (and may in fact be) impartial and competent, and therefore reliable. As a result, these findings will be vulnerable to exploitation in disinformation campaigns, particularly since AI is already being used to that end.

Safeguarding public trust requires redefining the scientist's role in more transparent, realistic terms. Much of today's research is incremental, career-sustaining work rooted in education, mentorship and public engagement. If we are to be honest with ourselves and with the public, we must abandon the incentives that pressure universities and scientific publishers, as well as scientists themselves, to exaggerate the significance of their work. Truly ground-breaking work is rare. But that does not render the rest of scientific work useless. A more humble and honest portrayal of the scientist as a contributor to a collective, evolving understanding will be more robust to AI-driven scrutiny than the myth of science as a parade of individual breakthroughs.

A sweeping, cross-disciplinary audit is on the horizon. It could come from a government watchdog, a think tank, an anti-science group or a corporation seeking to undermine public trust in science. Scientists can already anticipate what it will reveal. If the scientific community prepares for the findings - or better still, takes the lead - the audit could inspire a disciplined renewal.
But if we delay, the cracks it uncovers may be misinterpreted as fractures in the scientific enterprise itself. Science has never derived its strength from infallibility. Its credibility lies in the willingness to correct and repair. We must now demonstrate that willingness publicly, before trust is broken.

AI will soon be able to audit all published research — what will that mean for public trust in science?
New Indian Express, 26-07-2025

Self-correction is fundamental to science. One of its most important forms is peer review, when anonymous experts scrutinise research before it is published. This helps safeguard the accuracy of the written record. Yet problems slip through. A range of grassroots and institutional initiatives work to identify problematic papers, strengthen the peer-review process, and clean up the scientific record through retractions or journal closures. But these efforts are imperfect and resource intensive. Soon, artificial intelligence (AI) will be able to supercharge these efforts. What might that mean for public trust in science?

Peer review isn't catching everything

In recent decades, the digital age and disciplinary diversification have sparked an explosion in the number of scientific papers being published, the number of journals in existence, and the influence of for-profit publishing. This has opened the doors for exploitation. Opportunistic 'paper mills' sell quick publication with minimal review to academics desperate for credentials, while publishers generate substantial profits through huge article-processing fees. Corporations have also seized the opportunity to fund low-quality research and ghostwrite papers intended to distort the weight of evidence, influence public policy and alter public opinion in favour of their products. These ongoing challenges highlight the insufficiency of peer review as the primary guardian of scientific reliability. In response, efforts have sprung up to bolster the integrity of the scientific enterprise. Retraction Watch actively tracks withdrawn papers and other academic misconduct. Academic sleuths and initiatives such as Data Colada identify manipulated data and figures. Investigative journalists expose corporate influence. A new field of meta-science (science of science) attempts to measure the processes of science and to uncover biases and flaws. Not all bad science has a major impact, but some certainly does.
It doesn't just stay within academia; it often seeps into public understanding and policy. In a recent investigation, we examined a widely-cited safety review of the herbicide glyphosate, which appeared to be independent and comprehensive. In reality, documents produced during legal proceedings against Monsanto revealed that the paper had been ghostwritten by Monsanto employees and published in a journal with ties to the tobacco industry. Even after this was exposed, the paper continued to shape citations, policy documents and Wikipedia pages worldwide. When problems like this are uncovered, they can make their way into public conversations, where they are not necessarily perceived as triumphant acts of self-correction. Rather, they may be taken as proof that something is rotten in the state of science. This 'science is broken' narrative undermines public trust.

AI will soon be able to audit all published research – what will that mean for public trust in science?
Mint, 26-07-2025

Wellington and Naomi Oreskes, Harvard University. Wellington/Cambridge, Jul 26 (The Conversation)

Self-correction is fundamental to science. One of its most important forms is peer review, when anonymous experts scrutinise research before it is published. This helps safeguard the accuracy of the written record. Yet problems slip through. A range of grassroots and institutional initiatives work to identify problematic papers, strengthen the peer-review process, and clean up the scientific record through retractions or journal closures. But these efforts are imperfect and resource intensive. Soon, artificial intelligence (AI) will be able to supercharge these efforts. What might that mean for public trust in science?

Peer review isn't catching everything

In recent decades, the digital age and disciplinary diversification have sparked an explosion in the number of scientific papers being published, the number of journals in existence, and the influence of for-profit publishing. This has opened the doors for exploitation. Opportunistic 'paper mills' sell quick publication with minimal review to academics desperate for credentials, while publishers generate substantial profits through huge article-processing fees. Corporations have also seized the opportunity to fund low-quality research and ghostwrite papers intended to distort the weight of evidence, influence public policy and alter public opinion in favour of their products. These ongoing challenges highlight the insufficiency of peer review as the primary guardian of scientific reliability. In response, efforts have sprung up to bolster the integrity of the scientific enterprise. Retraction Watch actively tracks withdrawn papers and other academic misconduct. Academic sleuths and initiatives such as Data Colada identify manipulated data and figures. Investigative journalists expose corporate influence.
A new field of meta-science (science of science) attempts to measure the processes of science and to uncover biases and flaws. Not all bad science has a major impact, but some certainly does. It doesn't just stay within academia; it often seeps into public understanding and policy. In a recent investigation, we examined a widely-cited safety review of the herbicide glyphosate, which appeared to be independent and comprehensive. In reality, documents produced during legal proceedings against Monsanto revealed that the paper had been ghostwritten by Monsanto employees and published in a journal with ties to the tobacco industry. Even after this was exposed, the paper continued to shape citations, policy documents and Wikipedia pages worldwide. When problems like this are uncovered, they can make their way into public conversations, where they are not necessarily perceived as triumphant acts of self-correction. Rather, they may be taken as proof that something is rotten in the state of science. This 'science is broken' narrative undermines public trust.

AI is already helping police the literature

Until recently, technological assistance in self-correction was mostly limited to plagiarism detectors. But things are changing. Machine-learning services such as ImageTwin and Proofig now scan millions of figures for signs of duplication, manipulation and AI generation. Natural language processing tools flag 'tortured phrases' – the telltale word salads of paper mills. Bibliometric dashboards such as one by Semantic Scholar trace whether papers are cited in support or contradiction. AI – especially agentic, reasoning-capable models increasingly proficient in mathematics and logic – will soon uncover more subtle flaws. For example, the Black Spatula Project explores the ability of the latest AI models to check published mathematical proofs at scale, automatically identifying algebraic inconsistencies that eluded human reviewers.
Our own work mentioned above also substantially relies on large language models to process large volumes of text. Given full-text access and sufficient computing power, these systems could soon enable a global audit of the scholarly record. A comprehensive audit will likely find some outright fraud and a much larger mass of routine, journeyman work with garden-variety errors. We do not know yet how prevalent fraud is, but what we do know is that an awful lot of scientific work is inconsequential. Scientists know this; it's much discussed that a good deal of published work is never or very rarely cited. To outsiders, this revelation may be as jarring as uncovering fraud, because it collides with the image of dramatic, heroic scientific discovery that populates university press releases and trade press treatments. What might give this audit added weight is its AI author, which may be seen as (and may in fact be) impartial and competent, and therefore reliable. As a result, these findings will be vulnerable to exploitation in disinformation campaigns, particularly since AI is already being used to that end.

Reframing the scientific ideal

Safeguarding public trust requires redefining the scientist's role in more transparent, realistic terms. Much of today's research is incremental, career-sustaining work rooted in education, mentorship and public engagement. If we are to be honest with ourselves and with the public, we must abandon the incentives that pressure universities and scientific publishers, as well as scientists themselves, to exaggerate the significance of their work. Truly ground-breaking work is rare. But that does not render the rest of scientific work useless. A more humble and honest portrayal of the scientist as a contributor to a collective, evolving understanding will be more robust to AI-driven scrutiny than the myth of science as a parade of individual breakthroughs. A sweeping, cross-disciplinary audit is on the horizon.
It could come from a government watchdog, a think tank, an anti-science group or a corporation seeking to undermine public trust in science. Scientists can already anticipate what it will reveal. If the scientific community prepares for the findings – or better still, takes the lead – the audit could inspire a disciplined renewal. But if we delay, the cracks it uncovers may be misinterpreted as fractures in the scientific enterprise itself. Science has never derived its strength from infallibility. Its credibility lies in the willingness to correct and repair. We must now demonstrate that willingness publicly, before trust is broken. (The Conversation)
