Deepfakes are really dangerous but this small process could make them easier to catch

Hindustan Times | 2 days ago
Worried about deepfakes and how they can impact our societies, systems, and political processes? This simple fix could be the key to addressing the issue. A team at Cornell University has shown that you can watermark reality itself by using light, not software. Instead of embedding a signature in a file that a bad actor can strip or ignore, they embed a quiet code in the scene while you record. Lamps or panels fitted with a tiny controller nudge brightness in patterns that people do not notice, yet cameras do. The camera captures those fluctuations as part of the image. Later, anyone with the matching key can recover a low-fidelity code stream from the footage and check whether it lines up with the scene. If a face was swapped, an object pasted in, or a section reshot, the code in that region will not match. What you get is a built-in authenticity check that travels with the frames and does not rely on downstream cooperation from platforms or models.

The lights look ordinary yet quietly embed a code the camera sees, letting later edits reveal themselves. (Unsplash)
How the light code works and why it helps
At capture, the system gently modulates one or more lights with a pseudorandom sequence. The variations sit below human perception, so the scene looks normal to viewers in the room and on camera. Because the camera sensor integrates that light, the code becomes part of every frame. During verification, software extracts a reference signal from the footage and compares it with the expected pattern. A clean match says the scene was recorded under the coded lights. A mismatch highlights regions that do not belong. The clever twist is that you can run different codes on different fixtures in the same scene. That makes life difficult for forgers because any edit has to respect multiple overlapping light signatures, frame by frame, across moving subjects and changing shadows, according to Interesting Engineering.

File-based watermarks and metadata have never solved this. They depend on compliant software and can be stripped, re-encoded, or never added. A light-borne signature raises the bar in settings where truth matters most, such as interviews, debates, press briefings, and courtroom recordings. It does not stop every attack, and it will not fix content that was never lit with the system, but it shifts trust earlier in the chain and makes convincing edits costlier and slower to produce.
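For the curious, the core check can be sketched in a few lines. This is a minimal illustration, not the Cornell implementation: the function names, the ±1 code, and the roughly 1 per cent modulation depth are assumptions chosen for the demo.

```python
import numpy as np

def light_code(key: int, n_frames: int) -> np.ndarray:
    """Pseudorandom ±1 brightness code derived from a shared key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=n_frames)

def embed(frames: np.ndarray, code: np.ndarray, depth: float = 0.01) -> np.ndarray:
    """Simulate coded lighting: nudge each frame's brightness by about 1%."""
    return np.clip(frames * (1.0 + depth * code[:, None, None]), 0.0, 1.0)

def verify(frames: np.ndarray, key: int, threshold: float = 0.5) -> bool:
    """Recover the brightness signal and correlate it with the key's code."""
    code = light_code(key, len(frames))
    signal = frames.mean(axis=(1, 2))      # average brightness per frame
    signal = signal - signal.mean()        # drop the scene's constant level
    return np.corrcoef(signal, code)[0, 1] > threshold

# Toy demo: a flat grey scene recorded for 240 frames under coded light.
frames = np.full((240, 32, 32), 0.5)
coded = embed(frames, light_code(key=42, n_frames=240))
print(verify(coded, key=42))   # True: footage matches the expected pattern
print(verify(coded, key=7))    # False: wrong key, no correlation
```

A real verifier would run this per region rather than on whole frames, which is what lets a swapped face or pasted object stand out.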
Where this could land next
The practical upside is that you do not need to replace cameras. You retrofit lights. A postage-stamp-sized controller can live inside a studio panel, a conference room downlight, or a stage fixture. Newsrooms can light sets with coded patterns that look normal on air. Event organisers can enable coded lighting for high-stakes appearances without changing run sheets. Fact checkers can ask sources to supply a short verification clip alongside raw footage, which speeds reviews and reduces guesswork. Standards bodies can define open keys and audit trails so that verification scales beyond a single lab and works across vendors.

None of this is a silver bullet. Lighting can drift. Keys can leak. Outdoor scenes are harder to control, and the method needs care around skin tone rendering and flicker. The Cornell team frames it as a layer, not a lock. Pair it with provenance logs, capture-time attestations, and robust forensic models, and you get defence in depth that prioritises trust at the moment of recording rather than a late scramble after a video goes viral. In a year when election content will be tested by cheap synthesis, a watermark carried by photons is a refreshingly simple way to make fakes work harder and truth easier to prove.
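To see why a localised edit betrays itself during such a review, here is a companion sketch. Again, it is illustrative only: the flat simulated scene, the patch size, and the helper names are hypothetical. A region pasted in from footage that never saw the coded lights fails the regional correlation check while the genuine surroundings pass.

```python
import numpy as np

rng = np.random.default_rng(42)
code = rng.choice([-1.0, 1.0], size=240)   # shared pseudorandom light code

# 240 frames of a flat grey scene recorded under ~1% coded lighting.
base = 0.5 * (1.0 + 0.01 * code)
frames = np.tile(base[:, None, None], (1, 32, 32))

# Forgery: paste in a patch from footage that never saw the coded lights.
frames[:, 8:16, 8:16] = 0.5 + 0.001 * rng.standard_normal((240, 8, 8))

def region_corr(frames, code, rows, cols):
    """Correlate one region's brightness track with the expected code."""
    signal = frames[:, rows, cols].mean(axis=(1, 2))
    return np.corrcoef(signal - signal.mean(), code)[0, 1]

print(region_corr(frames, code, slice(0, 8), slice(0, 8)))    # ~1.0: genuine region
print(region_corr(frames, code, slice(8, 16), slice(8, 16)))  # ~0.0: pasted region
```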

Related Articles


From 10 feet away, AI can now listen to your phone calls using..., researchers say their goal is to make people...; Is our privacy in danger?

India.com | 2 days ago

There are many software tools around the world that can detect what people are talking about on the phone. Cases of phone tapping are common, and the Pegasus spyware controversy made headlines a few years ago. But now, technology has advanced so much that you don't even need spyware to listen to someone's phone conversation. With the help of AI, conversations can be picked up from as far as 10 feet away, just by detecting the vibrations from the phone. This has raised serious concerns about privacy.

Millimeter-wave radar and AI make it possible
According to a report by Interesting Engineering, researchers at Penn State University's computer science department have found a surprising new way to eavesdrop on phone calls. They used millimeter-wave radar along with artificial intelligence to detect tiny vibrations coming from a phone's earpiece and convert them into speech. This technology can track conversations from 10 feet away with about 60 per cent accuracy.

Millimeter-wave radar turning vibrations into words
The researchers used millimeter-wave radar, the same type of technology used in self-driving cars, motion sensors, and 5G networks. When we talk on the phone, the sound from the earpiece causes tiny vibrations in the phone's body. This radar can detect those vibrations. With the help of AI, the vibrations are then analyzed and converted into text. Researcher Suryoday Basak explained that by capturing these vibrations with radar and using machine learning, it's possible to figure out the conversation. While the technology isn't 100 per cent accurate yet, it is accurate enough to leak sensitive information.

60 per cent of the conversation was correct
In their trials, researchers placed the radar about 3 meters (roughly 10 feet) away from the phone. From this distance, the radar detected tiny vibrations from the phone, and AI converted the data into about 10,000 words of text. Around 60 per cent of the conversation matched accurately. While it's not fully precise yet, if this technology improves in the future, it could become a serious threat to privacy.

Big privacy concerns
This development has sparked serious privacy questions. Basak explained that it works a bit like lip reading: just as lip readers can guess what's being said, this system can do the same with vibrations. The real danger comes if someone with bad intentions uses it. The researchers say their goal is to make people aware of the risk so they can be more careful during sensitive conversations.
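The report describes the pipeline only at a high level. As a rough illustration of the radar-vibrometry idea, not the Penn State system (the carrier frequency, frame rate, and function names below are all assumptions), the phase of a millimeter-wave return can be converted into a displacement signal and filtered to the speech band before a machine-learning model attempts transcription.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Hypothetical constants: a 77 GHz automotive-style mmWave radar.
WAVELENGTH_M = 3e8 / 77e9        # carrier wavelength, about 3.9 mm
FRAME_RATE_HZ = 8000             # chirp rate, sets the vibration bandwidth

def displacement_from_iq(iq: np.ndarray) -> np.ndarray:
    """Convert the complex radar return from one range bin into displacement.

    The return's phase shifts by 4*pi/lambda per metre of radial motion, so
    unwrapped phase tracks the micrometre-scale vibration of the phone body.
    """
    phase = np.unwrap(np.angle(iq))
    return phase * WAVELENGTH_M / (4 * np.pi)

def speech_band(displacement: np.ndarray, lo=80.0, hi=3400.0) -> np.ndarray:
    """Band-pass the vibration signal to the telephone speech band."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=FRAME_RATE_HZ, output="sos")
    return sosfiltfilt(sos, displacement)

# Toy demo: a 300 Hz "voice" tone vibrating the phone by ~1 micrometre.
t = np.arange(FRAME_RATE_HZ) / FRAME_RATE_HZ
vibration = 1e-6 * np.sin(2 * np.pi * 300 * t)
iq = np.exp(1j * 4 * np.pi * vibration / WAVELENGTH_M)   # ideal noiseless return
audio_like = speech_band(displacement_from_iq(iq))
# audio_like would then be fed to a speech-recognition model to produce text.
```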

Shyam Sankar: Disruptor-in-chief

The Hindu | 5 days ago

In Silicon Valley, where startups often burn bright and vanish fast, Palantir Technologies has defied the odds. Over the past year, the software company's stock has soared more than 600%, making it the best-performing AI name in the S&P 500 and one of the decade's biggest tech success stories. On August 9, shares closed at a record $186.96, pushing the company's market cap north of $443 billion. At the centre of this rise is Shyam Sankar, Palantir's Mumbai-born Chief Technology Officer. On July 25, his net worth crossed $1.3 billion as the company's stock soared.

Raised in Orlando, Mr. Sankar earned a BS in electrical and computer engineering from Cornell University and an MS in management science and engineering from Stanford University. Known as a 'slayer of bureaucracy', he has spent over two decades building disruptive software and AI solutions for government and private clients. Mr. Sankar first learned about Palantir when a friend mentioned a small, stealthy, yet exciting software start-up looking for its first business hire in a largely technical role. The friend introduced him to one of the founders, and after seeing version 0.7 of the app, meeting a team of 'brilliant' people, and hearing about the company's mission, Mr. Sankar knew exactly where he wanted to be.

Founded in 2003 by Peter Thiel, a crucial backer of Donald Trump's first presidential campaign, along with Alex Karp, Joe Lonsdale, and Stephen Cohen, the Silicon Valley unicorn was initially funded by In-Q-Tel, the CIA's venture capital arm. It built its early reputation serving the U.S. government, particularly national security agencies, with a founding vision of harnessing man-machine symbiosis to help American and allied intelligence communities share data securely and prevent another 9/11 without compromising civil liberties.

Since joining Palantir in 2006 as its 13th employee, Mr. Sankar has pioneered the 'forward deployed engineer' model, embedding engineers directly with clients to tackle urgent, real-world challenges in real time. This approach was key to the success of Palantir's business model. Now headquartered in Denver, Palantir's platforms include Gotham, Foundry, and its Artificial Intelligence Platform (AIP). Its name, drawn from J.R.R. Tolkien's Lord of the Rings, refers to 'seeing stones' that reveal hidden truths. Inside the company, the unofficial motto, 'Save the Shire', reflects its mission in plain terms: protect America from threats. Palantir's technology centralises and analyses large and disparate datasets, with applications ranging from tracking enemy drones for soldiers to monitoring ship parts for sailors, to assisting health officials in processing drug approvals. In 2020, it went public via a direct listing on the New York Stock Exchange. As the company's profile and operations expanded, so did Mr. Sankar's role.

Leadership role
In January 2023, he was made the CTO and executive vice-president. 'Under his leadership, Palantir transformed from a Silicon Valley start-up to a global, industry leading software and AI company,' reads Mr. Sankar's profile on his Substack page. Today, Palantir counts more than 30 U.S. federal agencies and a group of Fortune 500 companies as clients. In the second quarter of 2025, it posted $1 billion in revenue, up 48% from a year earlier, beating Wall Street forecasts. In the first half of 2025, it pulled in more than $322 million from federal contracts, a 12% increase from two years earlier. The U.S. Army, once an adversary in a contracting dispute, has become one of its biggest customers. In June, Mr. Sankar himself was commissioned into the Army Reserve, a symbolic move that underscored Palantir's alignment with military priorities. Some Pentagon officials have voiced concern about over-reliance on a single contractor for core data-processing needs.

Palantir's reputation as a rapid-response problem solver was cemented during crises. At the height of the COVID-19 pandemic, it built systems to track the virus and vaccine distribution. After Russia's invasion of Ukraine, Palantir's technology was integrated into multiple Ukrainian government and military agencies. Similarly, days after the Hamas-led attack on Israel in October 2023, Mr. Karp, who is Jewish, flew with senior executives to Tel Aviv. Following a January 10 meeting with Israel's Defence Ministry officials, Palantir entered a strategic partnership with Israel to provide technology to aid its war efforts. The move drew criticism from pro-Palestinian activists in the U.S. At home, Palantir drew flak for a government contract to build an app that integrates data from across the government to assist with immigration enforcement.

With debates still simmering, Palantir is looking beyond U.S. borders. It is pursuing lucrative contracts in Saudi Arabia, from overhauling the country's healthcare system to helping build Neom, a futuristic megacity in the desert that has collided with practical and financial challenges. But for Mr. Sankar and Mr. Karp, controversies are part of Palantir's DNA. Regarding working with different government agencies on data processing and other projects, Mr. Sankar once said it is like 'shining a light on the battle space'. 'The things that you couldn't see before, you could see now...'
