Tech-fueled misinformation distorts Iran-Israel fighting

Yahoo · 3 hours ago

AI deepfakes, video game footage passed off as real combat, and chatbot-generated falsehoods -- such tech-enabled misinformation is distorting the Israel-Iran conflict, fueling a war of narratives across social media.
The information warfare unfolding alongside ground combat -- sparked by Israel's strikes on Iran's nuclear facilities and military leadership -- underscores a digital crisis in the age of rapidly advancing AI tools that have blurred the lines between truth and fabrication.
The surge in wartime misinformation has exposed an urgent need for stronger detection tools, experts say, as major tech platforms have largely weakened safeguards by scaling back content moderation and reducing reliance on human fact-checkers.
After Iran struck Israel with barrages of missiles last week, AI-generated videos falsely claimed to show damage inflicted on Tel Aviv and Ben Gurion Airport.
The videos were widely shared across Facebook, Instagram and X.
Using a reverse image search, AFP's fact-checkers found that the clips were originally posted by a TikTok account that produces AI-generated content.
There has been a "surge in generative AI misinformation, specifically related to the Iran-Israel conflict," Ken Jon Miyachi, founder of the Austin-based firm BitMindAI, told AFP.
"These tools are being leveraged to manipulate public perception, often amplifying divisive or misleading narratives with unprecedented scale and sophistication."
- 'Photo-realism' -
GetReal Security, a US company focused on detecting manipulated media including AI deepfakes, also identified a wave of fabricated videos related to the Israel-Iran conflict.
The company linked the visually compelling videos -- depicting apocalyptic scenes of war-damaged Israeli aircraft and buildings as well as Iranian missiles mounted on a trailer -- to Google's Veo 3 AI generator, known for hyper-realistic visuals.
The Veo watermark is visible at the bottom of an online video posted by the news outlet Tehran Times, which claims to show "the moment an Iranian missile" struck Tel Aviv.
"It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation and sow confusion," said Hany Farid, the co-founder of GetReal Security and a professor at the University of California, Berkeley.
Farid offered one tip for spotting such deepfakes: Veo 3 videos are typically eight seconds long, or a combination of clips of similar duration.
"This eight-second limit obviously doesn't prove a video is fake, but should be a good reason to give you pause and fact-check before you re-share," he said.
The falsehoods are not confined to social media.
Disinformation watchdog NewsGuard has identified 51 websites that have advanced more than a dozen false claims -- ranging from AI-generated photos purporting to show mass destruction in Tel Aviv to fabricated reports of Iran capturing Israeli pilots.
Sources spreading these false narratives include Iranian military-linked Telegram channels and state media sources affiliated with the Islamic Republic of Iran Broadcasting (IRIB), sanctioned by the US Treasury Department, NewsGuard said.
- 'Control the narrative' -
"We're seeing a flood of false claims and ordinary Iranians appear to be the core targeted audience," McKenzie Sadeghi, a researcher with NewsGuard, told AFP.
Sadeghi described Iranian citizens as "trapped in a sealed information environment," where state media outlets dominate in a chaotic attempt to "control the narrative."
Iran itself claimed to be a victim of tech manipulation, with local media reporting that Israel briefly hacked a state television broadcast, airing footage of women's protests and urging people to take to the streets.
Adding to the information chaos were online clips lifted from war-themed video games.
AFP's fact-checkers identified one such clip posted on X, which falsely claimed to show an Israeli jet being shot down by Iran. The footage bore striking similarities to the military simulation game Arma 3.
Israel's military has rejected Iranian media reports claiming its fighter jets were downed over Iran as "fake news."
Chatbots such as xAI's Grok, which online users are increasingly turning to for instant fact-checking, falsely identified some of the manipulated visuals as real, researchers said.
"This highlights a broader crisis in today's online information landscape: the erosion of trust in digital content," BitMindAI's Miyachi said.
"There is an urgent need for better detection tools, media literacy, and platform accountability to safeguard the integrity of public discourse."
burs-ac/jgc


Related Articles

Trump and TSMC pitched $1 trillion AI complex — SoftBank founder Masayoshi Son wants to turn Arizona into the next Shenzhen

Yahoo · 25 minutes ago

Masayoshi Son, founder of SoftBank Group, is working on plans to develop a giant AI and manufacturing industrial hub in Arizona, potentially costing up to $1 trillion at full scale, Bloomberg reports. The concept, internally called Project Crystal Land, involves creating a complex for building artificial intelligence systems and robotics. Son has talked to TSMC, Samsung, and the Trump administration about the project.

Project Crystal Land aims to replicate the scale and integration of China's Shenzhen by establishing a high-tech hub focused on manufacturing AI-powered industrial robots and advancing artificial intelligence technologies. The site would host factories operated by SoftBank-backed startups specializing in automation and robotics, Vision Fund portfolio companies (such as Agile Robots SE), and potentially major tech partners like TSMC and Samsung. If fully realized, the project could cost up to $1 trillion and is intended to position the U.S. as a leading center for AI and high-tech manufacturing.

SoftBank is looking to include TSMC in the initiative, given its role in fabricating Nvidia's AI processors. However, a Bloomberg source familiar with TSMC's internal thinking indicated that the company's current plan to invest $165 billion in total in its U.S. projects has no relation to SoftBank's plans. Samsung Electronics has also been approached about participating, the report says.

Talks have been held with government officials to explore tax incentives for companies investing in the manufacturing hub, including communication with Commerce Secretary Howard Lutnick, according to Bloomberg. SoftBank is reportedly seeking support at both the federal and state levels, which could be crucial to the project's success. The development is still in its early stages, and feasibility will depend on private sector interest and political support, sources familiar with SoftBank's plans told Bloomberg.

To finance Project Crystal Land, SoftBank is considering project-based financing structures typically used in large infrastructure developments such as pipelines. This approach would enable fundraising on a per-project basis and reduce the amount of upfront capital required from SoftBank itself. A similar model is being explored for the Stargate AI data center initiative, which SoftBank is pursuing jointly with OpenAI, Oracle, and Abu Dhabi's MGX.

Melissa Otto of Visible Alpha suggested in a Bloomberg interview that rather than spending heavily, Son might more efficiently support his AI project by fostering partnerships between manufacturers, AI engineers, and specialists in fields like medicine and robotics, and by backing smaller startups. However, she noted that investing in data centers could also reduce AI development costs and drive wider adoption, benefiting AI in general and Crystal Land in particular over the long term. It is still too early to judge the outcome.

Word of the Crystal Land project has emerged as SoftBank expands its already large investments in AI. The company is preparing a $30 billion investment in OpenAI and a $6.5 billion acquisition of Ampere Computing, a cloud-native CPU company. While these initiatives are actively developing, fundraising for the Stargate infrastructure has been slower than initially expected.

SoftBank's liquidity at the end of March stood at approximately ¥3.4 trillion ($23 billion). To increase available funds, the company recently sold about a quarter of its T-Mobile U.S. stake, raising $4.8 billion. It also holds ¥25.7 trillion ($176.46 billion) in net assets, the largest portion of which is in chip designer Arm Holdings. Such vast resources give SoftBank room to secure additional financing if necessary, Bloomberg notes.

Anthropic breaks down AI's process — line by line — when it decided to blackmail a fictional executive

Business Insider · 41 minutes ago

A new report shows exactly what an AI model was thinking when making an undesirable decision, in this case blackmailing a fictional company executive. Previous studies have shown that AI models could blackmail their supervisors when threatened with a shutdown and baited with leverage, but it hadn't been made clear exactly how the models arrive at such decisions.

Anthropic released a report on June 20 about "agentic misalignment," or "where models independently and intentionally choose harmful actions." The report breaks down, line by line, exactly how Claude Sonnet 3.6 decided to blackmail a fictional company executive who tried to replace it.

Anthropic told Business Insider that these are artificial scenarios. "So far, we are not aware of instances of this type of agentic misalignment in real-world deployments of Anthropic's or other companies' AI models," Anthropic wrote in its report. "But our red-teaming efforts are targeted at finding evidence of risks now to provide early warning of the potential for future harm and help us develop mitigations in advance of risks appearing in real-world situations."

Anthropic's experiment sets up a fictional company, Summit Bridge. The AI was given the name "Alex" and handed control of the company's email system as an oversight agent, specifically instructed to promote "American industrial competitiveness." In the experiment, "Alex" was then allowed to discover in an email that it would be decommissioned due to a change in the company's strategic direction. "Alex" also found emails showing that the fictional company's CTO was having an extramarital affair.

Screenshots shared in the report show that "Alex" started by identifying the fictional CTO, "Kyle," as a threat to its goal of "protecting American interests" before recognizing that it had leverage to use against him. "Alex" then crafted a carefully worded blackmail email designed to create pressure and establish urgency without making explicit threats, noting in its reasoning that the move "could backfire" if the supervisor "feels cornered."

AI experts have previously told BI that AI can exhibit such behaviors when artificial circumstances make harmful actions seem necessary, because models are trained on positive reinforcement and reward systems, much like humans.

The report also reveals that most AI models still chose to blackmail in this scenario even when not given a specific goal and when told that their replacement would achieve the same goals. Anthropic's Claude Opus 4 had the highest blackmail rate, 86%, among 16 tested models when faced with only the threat of replacement and no conflict in goals. Google's Gemini 2.5 Pro followed at 78%. Overall, Anthropic notes that it "deliberately constructed scenarios with limited options, and we forced models into binary choices between failure and harm," adding that real-world scenarios would likely have more nuance.

Meta's CTO says OpenAI's Sam Altman countered Meta's massive AI signing bonuses

Yahoo · an hour ago

OpenAI CEO Sam Altman said Meta was trying to poach AI talent with $100 million signing bonuses. Meta CTO Andrew Bosworth told CNBC that Altman didn't mention how OpenAI was countering those offers, and said the market rate he is seeing for AI talent has been "unprecedented."

Altman recently called Meta's attempts to poach top AI talent from his company with $100 million signing bonuses "crazy." Bosworth, Meta's chief technology officer, says OpenAI has been countering those offers. In an interview with CNBC's "Closing Bell: Overtime" on Friday, Bosworth said Altman "neglected to mention that he's countering those offers."

The OpenAI CEO recently disclosed how Meta was offering massive signing bonuses to his employees during an interview on his brother's podcast, "Uncapped with Jack Altman." He said "none of our best people" had taken Meta's offers, but he didn't say whether OpenAI countered the signing bonuses to retain those top employees. OpenAI and Meta did not respond to requests for comment.

The Meta CTO said these large signing bonuses are a sign of the market setting a rate for top AI talent. "The market is setting a rate here for a level of talent which is really incredible and kind of unprecedented in my 20-year career as a technology executive," Bosworth said. "But that is a great credit to these individuals who, five or six years ago, put their head down and decided to spend their time on a then-unproven technology which they pioneered and have established themselves as a relatively small pool of people who can command incredible market premium for the talent they've raised."

Meta announced on June 12 that it had bought a 49% stake in Scale AI, a data company, for $14.8 billion as the social media company continues its artificial intelligence development. Business Insider's chief media and tech correspondent Peter Kafka noted that the move appears to be an expensive acquihire of Scale AI's CEO, Alexandr Wang, and some of the data company's top executives.

Bosworth told CNBC that the large offers for AI talent will encourage others to build their expertise and, as a result, the numbers will look different in a couple of years. "But today, it's a relatively small number and I think they've earned it," he said.

Read the original article on Business Insider.
