Why a hybrid AI-human approach is necessary to uphold research integrity
The scientific research ecosystem is expanding rapidly. According to a June 2023 article on WordsRated, over five million scholarly articles were being published annually as of 2022. The number continues to rise, driven by the global increase in research institutions and journals and, more critically, by the 'publish or perish' culture that compels researchers to produce and publish findings quickly.
Publication pressure
Rising misconduct at the institutional level is closely tied to shifting academic priorities. Many institutions now focus heavily on increasing publication output, citation counts, and rankings, often at the expense of quality and ethical standards. Studies show that publication pressure is a strong predictor of research misconduct. Early-career academics, in particular, may resort to unethical behaviour under institutional pressure for high output. This overemphasis on metrics can undermine meaningful scientific contributions and compromise the integrity of research.
Traditional peer review, long regarded as the gold standard for quality control, is struggling to keep pace with the volume and sophistication of modern misconduct. Editorial staff and peer reviewers, often overburdened and under-resourced, may lack the tools or time to detect refined forms of fraud or manipulation. This challenge is compounded by the rise of paper mills, organisations that mass-produce fake research for profit. These groups exploit vulnerabilities in the publishing system, flooding journals with fabricated data, manipulated images, and illegitimate manuscripts.
Generative AI adds further complexity. While AI can support legitimate research, it also enables the creation of highly sophisticated fake studies, making detection even more difficult. As a result, even top academic institutions are not immune to the growing wave of retractions and scandals.
In this high-pressure academic environment, retractions are increasing at an alarming rate. In 2023, India recorded 2,737 retractions, the third-highest total globally after China and the U.S. When adjusted for publication volume, India also ranks among the top five countries by retraction rate, highlighting the scale of the problem. According to a November 2023 article in The Hindu, between 2020 and 2022, retractions in India increased 2.5 times compared to 2017–2019, driven by plagiarism, data fabrication, and fake peer review. In 2024, several Indian institutions came under scrutiny, resulting in multiple retractions.
Hybrid system
To detect fraud at scale, AI-powered tools are being adopted across the research ecosystem. These technologies can flag plagiarism, image manipulation, and potential paper-mill activity, providing real-time analysis that helps institutions catch compromised research early. AI's capacity to process vast datasets has significantly enhanced detection capabilities. However, AI cannot fully interpret context, nuance, or intent. Algorithms may produce false positives, miss subtle fraud, or struggle with complex scientific content. Hence, a hybrid approach combining AI with human oversight is emerging as the most effective solution.
A hybrid system that blends AI with human judgement offers the best defence against research fraud. AI excels at scanning large volumes of data and flagging potential issues. However, the true value of this system is realised when AI-generated alerts are assessed by human experts. Humans bring critical thinking and contextual insight, and can interpret complex or ambiguous cases. This partnership ensures that AI-generated warnings are carefully evaluated, reducing the risk of errors and adapting to new patterns of misconduct as they arise. Human reviewers also provide a safeguard against algorithmic bias, ensuring systems evolve with changing fraud tactics. This synergy between machine efficiency and human expertise builds a more resilient, adaptive framework for maintaining research integrity.
It is critical for institutions and funding agencies to invest in ethics education and set clear guidelines for responsible research. Early-career researchers and Ph.D. students should receive training in transparency and accountability. Publishers, too, must invest in both advanced technologies and skilled editorial teams to ensure that only authentic research is published. As scientific publishing continues to grow, safeguarding research integrity requires greater vigilance and innovation.
A hybrid AI-human approach allows institutions to address misconduct more effectively, overcoming the limitations of AI alone. Ultimately, the credibility of science depends on our shared commitment to transparency, authenticity, and ethical conduct at every stage of research.
The writer is Group CTO and EVP, Products and AI, Cactus Communications.