logo
What does the world's first AI worm mean for you?

What does the world's first AI worm mean for you?

Hindustan Times07-06-2025
The Creeper program is generally considered the world's first computer virus. Born as an experiment in 1971, it infected computers and slowed operations to a crawl. Reaper was the world's first antivirus, designed to destroy it.
The battle has waged on ever since.
There was the Brain virus from 1986, which spread through floppy disks and flashed alarming messages of infection on home computers (while also slowing them down). Then came the Morris Worm, a self-replicating program created at Cornell University, as part of an unofficial experiment, in 1988. It swept the world, slowing computers down to such a degree that a single email could take days to send.
Fast-forward to 2024, and a new beast has emerged, from a joint experiment conducted by Cornell, the Technion-Israel Institute of Technology and the software company Intuit. Named Morris II, it is being called the world's first generative AI worm.
The self-replicating bug has shown the ability to spread rapidly through AI-powered email, and target generative AI platforms such as ChatGPT and Google Gemini.
The way it creeps around is quite sinister. It essentially hides, somewhat like a shadow, in common chatbot and AI assistant prompts. When this prompt is issued by a generative AI model, and accepted by a user, it triggers a shadow instruction alongside. The shadow prompt may instruct the AI program to hand over data, alter code, or help the worm itself replicate.
Additionally, any time such a prompt is used to create an email or other such output, the worm spreads seamlessly to every recipient of that content.
Morris II isn't out there prowling yet. Researchers Ben Nassi, Stav Cohen and Ron Bitton created it in a controlled environment for the same reason many of their predecessors did this: to highlight levels of risk, and raise an alarm.
The really alarming thing, this time around, is that the worm may wreak its damage invisibly, without the user ever knowing it was there. It could also potentially 'learn' as it goes, finding new ways to infiltrate systems, and evade detection.
In a report released in April, cyber-security company Check Point Software has already noted that AI-driven malware could exploit vulnerabilities in real time, making traditional signature-based antivirus tools nearly obsolete.
This cat-and-mouse game isn't new. Traditional worms such as WannaCry, the ransomware that first appeared in 2017, cost billions in damages across hospitals, banks and governments, in this way. But AI raises the stakes
Track and shield
Unsurprisingly, security firms are already racing to harness AI to outsmart AI. Amid this race, consultancy firm McKinsey estimates that cybersecurity solutions, which companies around the world spent about $150 billion on in 2021, could soon be a market worth as much as $2 trillion.
Among the weapons emerging on the good side, in this battle, are virtual private networks or VPNs. In addition to anonymising a user's web-browsing data and providing a layer of security for information sent and received, companies such as ExpressVPN, Proton and Nord are evolving to offer clients solutions that will protect not just smartphones and computing devices but also smart TVs, appliances and home systems.
ExpressVPN, for instance, rolled out an 'AI shield' late last year that uses artificial intelligence to predict and neutralise zero-day exploits (which is when a new bug or vulnerability in a system is exploited, in the hours before it is fixed).
Traditional antivirus companies are responding to the shifting landscape too. Market leaders such as McAfee and Norton are working to provide advanced AI-led protection against AI-led threats, with a special focus on text messages, phone calls, email and web browser use.
We aren't at the point of dos and don'ts yet. It is still unclear what the threats may look like. But watch this space. It will pay to know all you can.
Because this time, it may not even take a click from you to change your world.
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Cornell researchers working on light-watermarking tactic to detect AI videos
Cornell researchers working on light-watermarking tactic to detect AI videos

The Hindu

time10 hours ago

  • The Hindu

Cornell researchers working on light-watermarking tactic to detect AI videos

Cornell researchers have proposed a way to help forensic experts tell AI-manipulated videos from genuine ones by using specially designed light sources at key events that reveal when videos have been morphed. A paper titled, 'Noise-Coded Illumination for Forensic and Photometric Video,' describes how light sources featured in a video could be encoded secretly through visual noise fluctuations. In essence, this would watermark the light source itself, rather than individually trying to watermark every video shot at an event in order to prevent these clips from being morphed. These curated light sources have a 'secret code' that can be used to verify the video authenticity and see if the visuals have been manipulated. Computer scientist and graduate student Peter Michael led this work on Noise-Coded Illumination (NCI). 'Our approach effectively adds a temporal watermark to any video recorded under coded illumination. However, rather than encoding a specific message, this watermark encodes an image of the unmanipulated scene as it would appear lit only by the coded illumination,' stated the paper. This tactic would allow forensic experts to compare a manipulated video to an easily accessible version of the original, instead of having to search for the source material manually. 'When an adversary manipulates video captured under coded illumination, they unwittingly change the code images contained therein. Knowing the codes used by each light source lets us recover and examine these code images, which we can use to identify and visualize manipulation,' stated the paper. The work noted that such an approach could be useful for public events and interviews, to prevent clips of these key meetings from being morphed. However, the success of the venture depends on the widespread adoption of the specially designed lights. As AI-generated videos or AI-morphed clips become more realistic, experts are looking at more ways to watermark original content. However, the need of the hour is a watermarking method that even malicious attackers cannot remove from the videos they work with.

Deepfakes are really dangerous but this small process could make them easier to catch
Deepfakes are really dangerous but this small process could make them easier to catch

Hindustan Times

time12 hours ago

  • Hindustan Times

Deepfakes are really dangerous but this small process could make them easier to catch

Worried about deepfakes and how they can impact our societies, systems, and political processes? This simple fix could be the key to address this issue. A team at Cornell University has shown that you can watermark reality itself by using light, not software. Instead of embedding a signature in a file that a bad actor can strip or ignore, they embed a quiet code in the scene while you record. Lamps or panels fitted with a tiny controller nudge brightness in patterns that people do not notice, yet cameras do. The camera captures those fluctuations as part of the image. Later, anyone with the matching key can recover a low fidelity code stream from the footage and check whether it lines up with the scene. If a face was swapped, an object pasted in, or a section reshot, the code in that region will not match. What you get is a built in authenticity check that travels with the frames and does not rely on downstream cooperation from platforms or models. The lights look ordinary yet they quietly embed a code the camera sees, letting later edits reveal themselves.(Unsplash) How the light code works and why it helps At capture, the system gently modulates one or more lights with a pseudorandom sequence. The variations sit below human perception, so the scene looks normal to viewers in the room and on camera. Because the camera sensor integrates that light, the code becomes part of every frame. During verification, software extracts a reference signal from the footage and compares it with the expected pattern. A clean match says the scene was recorded under the coded lights. A mismatch highlights regions that do not belong. The clever twist is that you can run different codes on different fixtures in the same scene. That makes life difficult for forgers because any edit has to respect multiple overlapping light signatures, frame by frame, across moving subjects and changing shadows, according to Interesting Engineering. File based watermarks and metadata have never solved this. They depend on compliant software and can be stripped, reencoded, or never added. A light borne signature raises the bar in settings where truth matters most, such as interviews, debates, press briefings, and courtroom recordings. It does not stop every attack, and it will not fix content that was never lit with the system, but it shifts trust earlier in the chain and makes convincing edits costlier and slower to produce. Where this could land next The practical upside is that you do not need to replace cameras. You retrofit lights. A postage stamp sized controller can live inside a studio panel, a conference room downlight, or a stage fixture. Newsrooms can light sets with coded patterns that look normal on air. Event organisers can enable coded lighting for high stakes appearances without changing run sheets. Fact checkers can ask sources to supply a short verification clip alongside raw footage, which speeds reviews and reduces guesswork. Standards bodies can define open keys and audit trails so that verification scales beyond a single lab and works across vendors. None of this is a silver bullet. Lighting can drift. Keys can leak. Outdoor scenes are harder to control, and the method needs care around skin tone rendering and flicker. The Cornell team frames it as a layer, not a lock. Pair it with provenance logs, capture time attestations, and robust forensic models, and you get a defence in depth that prioritises trust at the moment of recording rather than a late scramble after a video goes viral. In a year when election content will be tested by cheap synthesis, a watermark carried by photons is a refreshingly simple way to make fakes work harder and truth easier to prove.

Study of blood samples reveal biomarkers, may help develop blood test for chronic fatigue syndrome
Study of blood samples reveal biomarkers, may help develop blood test for chronic fatigue syndrome

Time of India

time14 hours ago

  • Time of India

Study of blood samples reveal biomarkers, may help develop blood test for chronic fatigue syndrome

New Delhi: An analysis of blood samples from patients of chronic fatigue syndrome has revealed substances, or ' biomarkers ', indicating the condition -- a finding which is seen as a "concrete step" towards developing a diagnostic blood test . Researchers from Cornell University , US, said that in the absence of lab tests for diagnosing the condition, doctors have to rely on patients reporting symptoms such as exhaustion, dizziness, disturbed sleep and 'brain fog'. The team explained that when a cell dies, it leaves behind marks or "fingerprints" -- genetic material released into blood plasma, tissue injury and signs of other biological processes. "By reading the molecular fingerprints that cells leave behind in blood, we've taken a concrete step toward a test for (myalgic encephalomyelitis/chronic fatigue syndrome) ME/CFS ," said Iwijn De Vlaminck, associate professor of biomedical engineering at Cornell University and co-senior author of the study published in the journal Proceedings of the National Academy of Sciences. "This study shows that a tube of blood can provide clues about the disease's biology," De Vlaminck said. From blood samples collected from patients of chronic fatigue syndrome and healthy people, the researchers sequenced RNA particles in blood plasma released due to cell damage and death. About 700 significant differences between the RNA material of the two study groups were found, revealing signs of a dysregulation in the immune system and exhaustion of T-cells (which help fight infections), among others, in patients suffering from chronic fatigue syndrome. The results were also used in developing an artificial intelligence model, which "achieved an accuracy of 77 per cent" in detecting signs specific to chronic fatigue syndrome. The accuracy rate may not be high enough for a diagnostic test yet, but it is a substantial leap forward in the field, the researchers said. Chronic fatigue syndrome "affects a lot of different parts of the body. The nervous system, immune system, cardiovascular system. Analysing plasma gives you access to what's going on in those different parts," said co-senior author Maureen Hanson, professor in the department of molecular biology and genetics, Cornell University.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store