Watch Video: Air defence units and drone strikes visible in Srinagar as Pakistan violates ceasefire

Business Upturn | 10-05-2025

Hours after a ceasefire between India and Pakistan was to take effect at 5 PM IST on May 10, fresh hostilities erupted late Saturday night as multiple explosions were reported across Srinagar. Visuals captured from the ground show Indian air defence units in action, reportedly intercepting hostile drones and aerial threats.
Videos shared by journalist Aditya Raj Kaul from Udhampur and Srinagar showed tracer fire lighting up the night sky, confirming engagement by India's anti-air systems. In one video, rapid bursts of gunfire can be heard as searchlights scan the skies, indicating a possible drone or missile threat.
Jammu & Kashmir Chief Minister Omar Abdullah also reacted with alarm on X (formerly Twitter), posting:
'What the hell just happened to the ceasefire? Explosions heard across Srinagar!!!'
In another tweet shortly after, the CM added:
'This is no ceasefire. The air defence units in the middle of Srinagar just opened up.'
Earlier in the day, officials had confirmed a ceasefire agreement reached through a DGMO-level conversation between India and Pakistan, with both sides agreeing to halt all firing and military activity on land, in the air, and at sea. Saturday night's escalation, however, raises serious doubts about whether the truce will hold.
Security forces have placed Srinagar and surrounding areas under red alert, and blackouts have been enforced across several sectors including Lal Chowk, BB Cantt area, and Safapora. Citizens have been urged to stay indoors and follow official instructions.
The situation remains tense and evolving. Updates are expected as more clarity emerges from defence and home ministry sources.
This is no ceasefire. The air defence units in the middle of Srinagar just opened up. pic.twitter.com/HjRh2V3iNW
— Omar Abdullah (@OmarAbdullah) May 10, 2025
BREAKING: Shooting this live drone engagement over Srinagar with my phone right now: pic.twitter.com/Srb3FDAhv6
— Shiv Aroor (@ShivAroor) May 10, 2025
Aditya Bhagchandani serves as the Senior Editor and Writer at Business Upturn, where he leads coverage across the Business, Finance, Corporate, and Stock Market segments. With a keen eye for detail and a commitment to journalistic integrity, he not only contributes insightful articles but also oversees editorial direction for the reporting team.


Related Articles

Indian MPs express concern over terror attack in Boulder, Colorado; Shashi Tharoor says 'terror has no place'

Business Upturn | 4 hours ago

By News Desk | Published on June 2, 2025, 08:32 IST

Members of the Indian MPs' delegation have expressed concern over the terror attack in Boulder, Colorado, which occurred on Monday. Senior Congress leader and Member of Parliament Shashi Tharoor posted on social media that the delegation learned about the incident 'with concern' and expressed relief that no loss of life was reported in the attack.

'Members of the Indian MPs' delegation learned with concern about the terror attack in Boulder, Colorado today. We are relieved there was no loss of life,' Tharoor wrote on X (formerly Twitter).

He added that the Indian delegation agrees with US Secretary of State Marco Rubio's view that 'terror has no place' in either country. 'We all share Secy of State @SecRubio's view that 'terror has no place' in our countries,' Tharoor added.

The Boulder incident has drawn swift condemnation, though US authorities are still investigating the full details of the attack. Indian leaders have consistently expressed solidarity with the US on counter-terrorism efforts, and the delegation's statement reinforces this shared stance.

Hey chatbot, is this true? AI 'factchecks' sow misinformation

Yahoo | 6 hours ago

As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification -- only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.

With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots -- including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini -- in search of reliable information.

"Hey @Grok, is this true?" has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.

Grok -- now under renewed scrutiny for inserting "white genocide," a far-right conspiracy theory, into unrelated queries -- wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as "likely" showing Pakistan's military response to Indian strikes.

"The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers," McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. "Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news," she warned.

- 'Fabricated' -

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election. In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead."

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken. Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as "genuine," even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification. The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X. Researchers have repeatedly questioned the effectiveness of "Community Notes" in combating falsehoods.

- 'Biased answers' -

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content -- something professional fact-checkers vehemently reject. AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.

The quality and accuracy of AI chatbots can vary depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control. Musk's xAI recently blamed an "unauthorized modification" for causing Grok to generate unsolicited posts referencing "white genocide" in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the "most likely" culprit.

Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.

"We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions," Angie Holan, director of the International Fact-Checking Network, told AFP. "I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers."

