Latest news with #PubPeer

Hindustan Times
07-08-2025
- Science
Fraudulent scientific papers are booming
SCIENTIFIC JOURNALS exist to do one thing: provide accurate, peer-reviewed reports of new research to an interested audience. But according to a paper published in PNAS on August 4th, that lofty goal is badly compromised. Scientific fraud, its authors conclude, happens on a massive scale and is growing quickly. In fact, though the number of scientific articles doubles every 15 years or so, the number thought to be fraudulent is doubling every 1.5 years.

It has long been clear that publication fraud rarely comes from lone fraudsters. Instead, companies known as paper mills prepare fake scientific papers full of made-up experiments and bogus data, often with the help of artificial-intelligence (AI) models, and sell authorship to academics looking to boost their publication numbers. But the analysis conducted by the paper's authors, Dr Amaral and his colleagues, suggests that some journal editors may be knowingly waving these papers through. Their article suggests that a subset of journal editors is responsible for the majority of questionable papers their publications produce.

To arrive at their conclusion, the authors looked at papers published by PLOS ONE, an enormous and generally well-regarded journal that identifies which of its 18,329 editors is responsible for each paper. (Most editors are academics who agree to oversee peer review alongside their research.) Since 2006 the journal has published 276,956 articles, 702 of which have been retracted and 2,241 of which have received comments on PubPeer, a site that allows other academics and online sleuths to raise concerns. When the team crunched the data, they found 45 editors who facilitated the acceptance of retracted or flagged articles much more frequently than would be expected by chance. Although they handled the peer-review process for only 1.3% of PLOS ONE submissions, they were responsible for 30.2% of retracted articles.

The data suggested yet more worrying patterns. For one thing, more than half of these editors were themselves authors of papers later retracted by PLOS ONE. What's more, when they submitted their own papers to the journal, they regularly suggested each other as editors. Although papers can be retracted for many reasons, including honest mistakes, Dr Amaral believes these patterns indicate a network of editors co-operating to bypass the journal's usual standards.

Dr Amaral does not name the editors in his article, but Nature, a science magazine, subsequently made use of his analysis to track down five of the relevant editors. PLOS ONE says that all five were investigated and dismissed between 2020 and 2022. Those who responded to Nature's enquiries denied wrongdoing.

Compelling as Dr Amaral's analysis is, it does not conclusively prove dishonest behaviour. All the same, the findings add to a growing body of evidence suggesting some editors play an active role in the publication of substandard research. An investigation in 2024 by Retraction Watch, an organisation that monitors retracted papers, and Science, another magazine, found that paper mills have bribed editors in the past. Editors might also use their powers to further their own academic careers. Sleuths on PubPeer have flagged papers in several journals which seem to be co-written by either the editor overseeing the peer review or one of their close collaborators, a clear conflict of interest.
Detecting networks of editors the way Dr Amaral's team has 'is completely new', says Alberto Ruano Raviña of the University of Santiago de Compostela in Spain, who researches scientific fraud and was not involved with the study. He is particularly worried about fake papers remaining part of the scientific record in medical fields, where their spurious findings might be used to conduct reviews that inform clinical guidelines. A recent paper in the BMJ, a medical journal, found that 8-16% of the conclusions in systematic reviews that included later-retracted evidence ended up being wrong. 'This is a real problem,' says Dr Ruano Raviña.

Yet the incentives for fraud continue to outweigh the consequences. Measures including a researcher's number of publications and citations have become powerful proxies for academic achievement, and are seen as necessary for building a career. 'We have become focused on numbers,' says Dr Amaral. This is sometimes made explicit: staff at Indian medical colleges are required to publish a certain number of papers in order to progress. Some journals, for their part, make more money the more articles they accept.

Breaking either trend will take time. In the meantime, publishers are rolling out new screening tools for suspicious content, including some which spot 'tortured phrases' (nonsensical plagiarism-evading paraphrases generated by AI models, such as 'colossal information' instead of 'big data') or citations in the wrong places. There is also increasing pressure on publishers to root out bad papers. Databases of reputable journals, such as Scopus or Web of Science, can 'de-list' journals, ruining their reputations. It's up to the publishers to bring about a relisting, which means tidying up the journal. 'If we see untrustworthy content that you're not retracting, you're not getting back in,' says Nandita Quaderi, editor-in-chief of Web of Science. But whether publishers and the many editors who work hard to keep bad science out of their journals can keep up with the paper mills remains to be seen.
Business Times
06-06-2025
- Politics
This isn't how you 'restore gold standard' science
IN another attempt to concentrate power, President Donald Trump has signed an executive order to 'restore gold standard science' in federal research and policy. It sounds reasonable given the instances of bad or faked science being published, including high-profile papers on Alzheimer's drug development and one misleadingly claiming that hydroxychloroquine would cure Covid-19. In the last decade, scientists themselves have grown concerned about the large number of studies whose promising results couldn't be replicated.

However, researchers dedicated to reforming their field say the president's plan isn't a solution. It's a way to give government officials the power to reject evidence they disagree with – without any accountability or transparency. There is already a long history of US policies that ignored scientific evidence, from allowing toxic lead in petrol to decades of failing to act on the known dangers of asbestos and cigarettes. Science alone can't decide policy, but the public and lawmakers need reliable scientific data to decide, for example, which pesticides or food additives to ban, or how to regulate genetically modified crops.

Trump's order cites the prolonged school closures during the pandemic as a flaw in the system. Many US schools stayed closed long after those in most European countries had reopened. However, the US policy decision had little to do with science – shoddy or otherwise. It was more about a clash of values and political polarisation, along with a lack of balanced, evidence-based public discussion.

He also criticises the National Marine Fisheries Service for basing restrictions on Maine's lobster fishing industry on a worst-case scenario aimed at protecting the endangered right whale. But the public might benefit from knowing such scenarios – unless their likelihood is being exaggerated. Ultimately, the decision comes down to values: Americans might want to act on even a small chance that an industry could drive a species to extinction.

The language in the executive order is nearly identical to that used by scientists already working to improve research standards, including reproducibility, communication about errors and uncertainty, and scepticism about assumptions. In recent years, fields with replication problems have made progress towards those goals by requiring more transparency in reporting data and statistical methods. Peers uncovered fraud in the research of Harvard Professor Francesca Gino, who was fired from her tenured position last month. Journals and scientific societies are requiring more disclosure about potential conflicts of interest, and scientists are using a platform called PubPeer to criticise published work, which can lead to corrections and retractions.

But the president's directive isn't really aimed at improving science. 'The executive order converts principles of good practice into weapons against scientific evidence,' said psychologist Brian Nosek, co-founder of the Centre for Open Science. Deciding what's credible should be a decentralised process, Nosek said, with many people and lines of evidence being presented and different parties challenging each other. He and other experts in science research reform say that even good studies aren't perfect. There's widespread concern the executive order could allow government officials to flag almost anything as not up to their definition of 'gold standard'.
Sometimes the best we have are observational studies or models. Nutrition is notoriously hard to study with reproducible experiments, but we still have to decide what to put in school lunches. And there is no default precautionary position where you wait for perfect evidence; inaction can kill people, too.

The executive order comes amid drastic federal funding cuts to the National Science Foundation and similar institutions. It's not surprising that many scientists see the order not as a way to improve scientific standards, but as the latest offensive in a war on science.

The document begins by blasting the Centres for Disease Control and Prevention (CDC) for discouraging in-person learning during the pandemic even though 'the best available scientific evidence showed children were unlikely to transmit or suffer serious illness or death from the virus'. On the surface, this is backed up by reporting from The New York Times, which cited data showing that prolonged school closures didn't significantly decrease Covid-19 mortality and set many kids back in their education.

In his book, An Abundance of Caution, journalist David Zweig makes a case that the relevant scientific data were available in the spring and summer of 2020, and that by May many European schools were up and running with no uptick in casualties.

In my own reporting back in the summer of 2020, I found the problem was more bottom-up than top-down. The data couldn't reassure people that there was zero risk, and some worried that any danger of severe infection was unacceptable – for students or teachers. By summer 2020, the CDC had acknowledged the benefits of in-person education, but the American public was struggling to have a rational debate. It was more a matter of moral outrage over our different values than any disagreement over science. Many factors fed into some regrettable policy choices, including social media algorithms that drowned out reasoned, fact-based discussion with misinformation and mudslinging.

What we didn't need then was more centralised control of science – and it's the last thing we need now.

BLOOMBERG