logo
'They're Hacking the AI to Approve Their Lies': Scientists Busted Embedding Hidden Prompts to Trick Systems Into Validating Fake Studies

'They're Hacking the AI to Approve Their Lies': Scientists Busted Embedding Hidden Prompts to Trick Systems Into Validating Fake Studies

IN A NUTSHELL 🔍 Investigations by Nikkei Asia and Nature reveal hidden prompts in studies aiming to manipulate AI review systems.
and reveal hidden prompts in studies aiming to manipulate AI review systems. 🌐 Approximately 32 studies from 44 institutions worldwide were identified with these unethical practices, causing significant concern.
⚠️ The over-reliance on AI in peer review raises ethical questions, as some reviewers may bypass traditional scrutiny.
in peer review raises ethical questions, as some reviewers may bypass traditional scrutiny. 🔗 Experts call for comprehensive guidelines on AI use to ensure research integrity and prevent manipulative practices.
The world of scientific research is facing a new, controversial challenge: the use of hidden prompts within scholarly studies intended to manipulate AI-driven review systems. This revelation has sparked significant debate within the academic community, as it sheds light on potential ethical breaches and the evolving role of technology in research validation. As scientists grapple with these issues, it is crucial to understand the implications of these practices on the trustworthiness of scientific findings and the integrity of academic publications. Hidden Messages in Studies: A Startling Discovery
Recent investigations by Nikkei Asia and Nature have uncovered instances of hidden messages within academic studies. These messages, often concealed in barely visible fonts or written in white text on white backgrounds, are not meant for human reviewers but target AI systems like Large Language Models (LLMs) to influence their evaluations. Such practices have raised alarms, as they attempt to secure only positive assessments for research submissions.
Approximately 32 studies have been identified with these manipulative prompts. These studies originated from 44 institutions across 11 countries, highlighting the global reach of this issue. The revelation has prompted the removal of these studies from preprint servers to maintain the integrity of the scientific process. The use of AI in peer review, intended to streamline the evaluation process, is now under scrutiny for its potential misuse and ethical implications.
'$100 Million Vanished and Nothing Flew': DARPA's Canceled Liberty Lifter Seaplane Leaves Behind a Trail of Broken Dreams and Game-Changing Tech The Broader Implications of AI in Peer Review
The discovery of hidden prompts in studies not only exposes unethical practices but also raises questions about the reliance on AI for peer review. While AI can assist in managing the growing volume of research, it appears that some reviewers may be over-relying on these systems, bypassing traditional scrutiny. Institutions like the Korea Advanced Institute of Science and Technology (KAIST) prohibit AI use in review processes, yet the practice persists in some quarters.
Critics argue that these hidden prompts are symptomatic of systemic problems within academic publishing, where the pressure to publish can outweigh ethical considerations. The use of AI should be carefully regulated to prevent such manipulations, ensuring that peer review remains a rigorous and trustworthy process. As the academic community grapples with these challenges, it becomes evident that adherence to ethical standards is crucial in maintaining the credibility of scientific research.
'They're Turning Pollution Into Candy!': Chinese Scientists Stun the World by Making Food from Captured Carbon Emissions The Ethical Imperative: Why Science Must Avoid Deception
Science is fundamentally built on trust and ethical integrity. From technological advancements to medical breakthroughs, the progress of society hinges on the reliability of scientific findings. However, the temptation to resort to unethical shortcuts, such as AI manipulation, poses a threat to this foundation. The scientific community must resist these temptations to preserve the credibility of their work.
The pressures facing researchers, including increased workloads and heightened scrutiny, may drive some to exploit AI. Yet, these pressures should not justify compromising ethical standards. As AI becomes more integrated into research, it is vital to establish clear regulations governing its use. This will ensure that science remains a bastion of truth and integrity, free from deceptive practices that could undermine public trust.
'They Cloned a Yak in the Himalayas!': Chinese Scientists Defy Nature with First-Ever Livestock Copy at 12,000 Feet Charting a Course Toward Responsible AI Use
The integration of AI into scientific processes demands careful consideration and responsible use. As highlighted by Hiroaki Sakuma, an AI expert, industries must develop comprehensive guidelines for AI application, particularly in research and peer review. Such guidelines will help navigate the ethical complexities of AI, ensuring it serves as a tool for advancement rather than manipulation.
While AI holds the potential to revolutionize research, its implementation must be guided by a commitment to ethical standards. The scientific community must engage in ongoing dialogue to address the challenges posed by AI, fostering a culture of transparency and accountability. Only through these measures can science continue to thrive as a pillar of progress, innovation, and truth.
As the intersection of AI and scientific research continues to evolve, how can the academic community ensure that technological advancements enhance rather than undermine the integrity of scientific inquiry?
This article is based on verified sources and supported by editorial technologies.
Did you like it? 4.5/5 (26)
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

'They're Putting It in Space to Dominate': China's Orbital Supercomputer Leaves Global Tech Powers Scrambling to Catch Up
'They're Putting It in Space to Dominate': China's Orbital Supercomputer Leaves Global Tech Powers Scrambling to Catch Up

Sustainability Times

timea day ago

  • Sustainability Times

'They're Putting It in Space to Dominate': China's Orbital Supercomputer Leaves Global Tech Powers Scrambling to Catch Up

IN A NUTSHELL 🚀 China has launched 12 satellites as part of the Three-Body Computing Constellation to perform space-based data processing. has launched 12 satellites as part of the to perform space-based data processing. 💡 The constellation aims to achieve a computing capacity of 1,000 peta operations per second , potentially surpassing terrestrial supercomputers. , potentially surpassing terrestrial supercomputers. 🌍 Equipped with tools for Earth observation and 3D terrain mapping , the satellites support diverse applications, including public safety and research. and , the satellites support diverse applications, including public safety and research. 🤝 China is open to international collaboration, aiming to develop a smart, interconnected orbital infrastructure accessible globally. Recent advancements in space technology are reshaping our understanding of data processing, with China at the forefront of this transformation. The nation has embarked on an ambitious project to establish a supercomputer network in space, leveraging a constellation of satellites to perform real-time, space-based data processing. This groundbreaking initiative, known as the Three-Body Computing Constellation, promises to revolutionize how we handle data by shifting processing power from Earth to the vast expanse of outer space. This leap towards orbital computing marks a significant milestone in the integration of artificial intelligence with space technology. The Launch of a Revolutionary Satellite Network On May 14, a major step was taken in realizing China's vision of orbital computing with the launch of 12 satellites from the Jiuquan Satellite Launch Center in the Gobi Desert. These satellites are the initial components of a planned fleet of 2,800, designed to enable space-based data processing. This project is a collaborative effort between ADA Space, a Chinese aerospace startup, and the state-backed Zhijiang Laboratory. The satellites are equipped with AI models comprising 8 billion parameters, capable of performing up to 5 peta operations per second (POPS). Once the full network is operational, it is expected to reach a staggering 1,000 POPS, potentially surpassing the capabilities of the most advanced terrestrial supercomputers. The initiative represents a shift in how data is processed by allowing for calculations to occur in space, thereby avoiding the delays associated with transmitting information back to Earth. The satellites are interconnected through laser communication links, transmitting data at speeds of up to 100 gigabits per second, and each satellite boasts a storage capacity of 30 terabytes. This interconnected mesh operates as a single distributed computer in low Earth orbit, paving the way for unprecedented computational efficiency. 'They Can Dodge Anything We Throw at Them': China's Secret Algorithm Outsmarts Even America's Most Advanced Hypersonic Defenses Scientific Payloads and Strategic Objectives Beyond its impressive computing power, the Three-Body Computing Constellation serves multiple scientific and strategic purposes. Each satellite is equipped with instruments for real-time Earth observation, 3D terrain mapping, and cosmic X-ray burst detection. These capabilities support diverse applications, from emergency response to research and commercial services. ADA Space envisions the generation of 'digital twins' of Earth's terrain, offering real-time insights for various industries, including agriculture and national defense. Moreover, one satellite features an X-ray polarization detector aimed at capturing high-energy cosmic events, such as gamma-ray bursts, marking a new venture into astrophysical research. This project aligns with China's broader goals of advancing industrial and economic development. Data generated from the constellation will feed into development zones like the Neijiang high-tech zone, fostering a local ecosystem centered on smart satellite data services. This integration of satellite technology with AI at scale underscores China's ambition to lead in space infrastructure and digital technology. 'They Finally Saw It!': James Webb Captures 14 Herculis c, the Mysterious Exoplanet Hidden in Darkness for Over Two Decades A Push for Global Collaboration While the Three-Body Computing Constellation is a distinctly national project, China has expressed openness to international collaboration. ADA Space, in particular, aims to work with international partners, especially countries in the Global South, to develop a 'smart, interconnected, low-carbon, globally accessible and sustainable' orbital infrastructure. This openness to global cooperation reflects a strategic approach to fostering international partnerships in space technology. During the Beyond Expo technology conference in Macau, Wang Jian emphasized the importance of space as a new frontier for AI development. The constellation's name, inspired by Liu Cixin's science fiction novel 'The Three-Body Problem,' alludes to the complexities of gravitational systems and interstellar cooperation. By leading in the development of orbital computing, China seeks to redefine how data is gathered, processed, and utilized in space, setting the stage for future advancements in the field. 'Unlocking a Cosmic Code': Alien Math Language Reveals Secrets of the Universe in This Groundbreaking 320-Page Report by Top Scientists Implications for the Future of Space-Based AI The deployment of the Three-Body Computing Constellation signals China's intent to lead in the emerging space-based AI race. Although other space agencies and private firms have tested limited orbital computing capabilities, China's project represents the first operational attempt to construct a computing array at such a large scale. This initiative not only showcases China's technological prowess but also highlights the country's strategic vision for the future of space-based data processing. In parallel, notable figures like former Google CEO Eric Schmidt have proposed launching large-scale data centers into orbit, recognizing the immense demand for computational power. With projections indicating that data centers could consume up to 96 gigawatts globally by 2030, the need for innovative solutions like orbital computing becomes increasingly apparent. As China continues to develop the Three-Body Computing Constellation, the world watches closely, anticipating the transformative impact of this pioneering project on the future of data processing and artificial intelligence. As the Three-Body Computing Constellation progresses, it raises intriguing questions about the future of space-based technology. Will this initiative set a new standard for how data is processed and utilized in space? How will international collaboration shape the development of this ambitious project, and what implications will it have for the global landscape of artificial intelligence? The answers to these questions will define the next era of technological innovation and exploration. This article is based on verified sources and supported by editorial technologies. Did you like it? 4.5/5 (27)

'They're Hacking the AI to Approve Their Lies': Scientists Busted Embedding Hidden Prompts to Trick Systems Into Validating Fake Studies
'They're Hacking the AI to Approve Their Lies': Scientists Busted Embedding Hidden Prompts to Trick Systems Into Validating Fake Studies

Sustainability Times

time3 days ago

  • Sustainability Times

'They're Hacking the AI to Approve Their Lies': Scientists Busted Embedding Hidden Prompts to Trick Systems Into Validating Fake Studies

IN A NUTSHELL 🔍 Investigations by Nikkei Asia and Nature reveal hidden prompts in studies aiming to manipulate AI review systems. and reveal hidden prompts in studies aiming to manipulate AI review systems. 🌐 Approximately 32 studies from 44 institutions worldwide were identified with these unethical practices, causing significant concern. ⚠️ The over-reliance on AI in peer review raises ethical questions, as some reviewers may bypass traditional scrutiny. in peer review raises ethical questions, as some reviewers may bypass traditional scrutiny. 🔗 Experts call for comprehensive guidelines on AI use to ensure research integrity and prevent manipulative practices. The world of scientific research is facing a new, controversial challenge: the use of hidden prompts within scholarly studies intended to manipulate AI-driven review systems. This revelation has sparked significant debate within the academic community, as it sheds light on potential ethical breaches and the evolving role of technology in research validation. As scientists grapple with these issues, it is crucial to understand the implications of these practices on the trustworthiness of scientific findings and the integrity of academic publications. Hidden Messages in Studies: A Startling Discovery Recent investigations by Nikkei Asia and Nature have uncovered instances of hidden messages within academic studies. These messages, often concealed in barely visible fonts or written in white text on white backgrounds, are not meant for human reviewers but target AI systems like Large Language Models (LLMs) to influence their evaluations. Such practices have raised alarms, as they attempt to secure only positive assessments for research submissions. Approximately 32 studies have been identified with these manipulative prompts. These studies originated from 44 institutions across 11 countries, highlighting the global reach of this issue. The revelation has prompted the removal of these studies from preprint servers to maintain the integrity of the scientific process. The use of AI in peer review, intended to streamline the evaluation process, is now under scrutiny for its potential misuse and ethical implications. '$100 Million Vanished and Nothing Flew': DARPA's Canceled Liberty Lifter Seaplane Leaves Behind a Trail of Broken Dreams and Game-Changing Tech The Broader Implications of AI in Peer Review The discovery of hidden prompts in studies not only exposes unethical practices but also raises questions about the reliance on AI for peer review. While AI can assist in managing the growing volume of research, it appears that some reviewers may be over-relying on these systems, bypassing traditional scrutiny. Institutions like the Korea Advanced Institute of Science and Technology (KAIST) prohibit AI use in review processes, yet the practice persists in some quarters. Critics argue that these hidden prompts are symptomatic of systemic problems within academic publishing, where the pressure to publish can outweigh ethical considerations. The use of AI should be carefully regulated to prevent such manipulations, ensuring that peer review remains a rigorous and trustworthy process. As the academic community grapples with these challenges, it becomes evident that adherence to ethical standards is crucial in maintaining the credibility of scientific research. 'They're Turning Pollution Into Candy!': Chinese Scientists Stun the World by Making Food from Captured Carbon Emissions The Ethical Imperative: Why Science Must Avoid Deception Science is fundamentally built on trust and ethical integrity. From technological advancements to medical breakthroughs, the progress of society hinges on the reliability of scientific findings. However, the temptation to resort to unethical shortcuts, such as AI manipulation, poses a threat to this foundation. The scientific community must resist these temptations to preserve the credibility of their work. The pressures facing researchers, including increased workloads and heightened scrutiny, may drive some to exploit AI. Yet, these pressures should not justify compromising ethical standards. As AI becomes more integrated into research, it is vital to establish clear regulations governing its use. This will ensure that science remains a bastion of truth and integrity, free from deceptive practices that could undermine public trust. 'They Cloned a Yak in the Himalayas!': Chinese Scientists Defy Nature with First-Ever Livestock Copy at 12,000 Feet Charting a Course Toward Responsible AI Use The integration of AI into scientific processes demands careful consideration and responsible use. As highlighted by Hiroaki Sakuma, an AI expert, industries must develop comprehensive guidelines for AI application, particularly in research and peer review. Such guidelines will help navigate the ethical complexities of AI, ensuring it serves as a tool for advancement rather than manipulation. While AI holds the potential to revolutionize research, its implementation must be guided by a commitment to ethical standards. The scientific community must engage in ongoing dialogue to address the challenges posed by AI, fostering a culture of transparency and accountability. Only through these measures can science continue to thrive as a pillar of progress, innovation, and truth. As the intersection of AI and scientific research continues to evolve, how can the academic community ensure that technological advancements enhance rather than undermine the integrity of scientific inquiry? This article is based on verified sources and supported by editorial technologies. Did you like it? 4.5/5 (26)

H&M Group's Linda Leopold steps down as head of AI strategy after seven years
H&M Group's Linda Leopold steps down as head of AI strategy after seven years

Fashion Network

time3 days ago

  • Fashion Network

H&M Group's Linda Leopold steps down as head of AI strategy after seven years

Linda Leopold exits H&M Group after seven years leading its AI strategy, including its Responsible AI program. She now focuses on consulting, writing, and speaking on the ethical implications of AI in tech, fashion, and beyond. H&M Group, the Swedish fashion giant known for its global retail footprint and tech-forward initiatives, has announced the departure of Linda Leopold, who served as Head of AI Strategy. After seven years in strategic leadership roles, Leopold is stepping down to focus on consulting, writing, and speaking engagements centered on artificial intelligence and its ethical development across industries. Leopold joined H&M Group in 2018 and held several key roles within the company's growing AI division. As Head of AI Policy, she played a critical role in launching and expanding the brand's Responsible AI program. Under her guidance, H&M Group established frameworks for digital ethics and adopted strategic approaches to implementing generative AI technologies. 'These years were extraordinary—not only because I had the opportunity to help shape H&M's AI direction, but also because I witnessed AI evolve at lightning speed,' Leopold wrote on LinkedIn. 'I'm particularly proud of building the Responsible AI program from the ground up and contributing to the global conversation on ethical AI.' Her leadership earned international recognition. In 2022, Forbes named her one of the world's nine most influential women in AI. Before her time at H&M Group, Leopold worked as an innovation strategist bridging fashion and technology and also served as editor-in-chief of the Scandinavian fashion and culture magazine Bon. 'Now it's time for the next chapter,' she added. 'With AI at such a pivotal point, I want to help guide its development across different industries and organizations.' Leopold's exit comes as H&M Group continues its push into digital innovation. Earlier this month, the brand launched a new denim capsule collection powered by digital twin technology —part of a larger strategy to integrate generative AI into storytelling and customer engagement. According to Chief Creative Officer Jörgen Andersson, the goal is to create emotional connections with consumers without diluting brand identity. The first drop debuted on July 2 via H&M's global online store, with more launches planned this fall. While investing in new technologies, H&M Group also faces mounting economic pressures. The company reported a 5% year-over-year decline in net sales for the second quarter, falling to SEK 56.7 billion. However, operating profit rose slightly to SEK 5.9 billion—beating analyst forecasts. The group also improved inventory management, though deeper price cuts are expected in the third quarter as customers become more cautious with spending. 'We're seeing greater price sensitivity among customers due to ongoing uncertainty,' Group CEO Daniel Erver said during the latest earnings call.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store