Latest news with #AmitRelan


Time of India
2 days ago
- Business
- Time of India
Air India plane crash: AI-generated fake reports, videos spreading misinformation; fraudsters exploiting vulnerability
In the aftermath of the tragic crash of Air India Flight 171 in Ahmedabad, which claimed 275 lives on June 12, the spread of AI-generated misinformation has stirred fresh concerns about digital disinformation during crises. A fake preliminary crash report, laced with aviation jargon and even emoji, went viral in aviation circles before being refuted by authorities, ET reported. The document was later found to have been generated by artificial intelligence using details from a 2024 LATAM Airlines incident in South America. Before the Indian government could label the report fake, news websites had already published stories based on it, misleading even aviation professionals.

According to the ministry, the Aircraft Accident Investigation Bureau (AAIB) retrieved the cockpit voice recorder (CVR) and flight data recorder (FDR) and transported them to New Delhi on June 24, more than a week after their recovery, but offered no explanation for the delay, as per ET. The information vacuum was quickly filled with fabricated visuals and narratives.

Amit Relan, CEO of digital fraud detection firm mFilterIt, was quoted by ET as saying, 'We've observed a disturbing pattern in how bad actors are leveraging AI and social media platforms to spread misinformation and commit fraud during sensitive events like the Air India Flight 171 crash.' His firm identified not only deepfake videos of the crash aftermath but also fraudulent fundraising campaigns. 'This is a classic case of emotionally-driven financial fraud,' Relan warned.

Fact-checking group BOOM also flagged several AI-generated visuals, including doctored images showing the aircraft ablaze or falsely positioned outside Ahmedabad airport. These images carried no disclaimers and were flagged as synthetic by AI detection tools.
Former airline pilot and crash investigation consultant John Cox criticised the AAIB's slow communication. 'This is the most extensive case of misinformation that has been seen during any accident,' he told ET. 'The AAIB should be having daily briefings as done by agencies across the globe. Because in the absence of information, it is misinformation that fills the void.'

The International Civil Aviation Organisation (ICAO) also highlights the need for effective media communication during accident investigations, stating that 'a well-planned and executed communication strategy can go a long way in minimising negative publicity and ensuring facts are reported in a timely and accurate manner.'

Mishi Choudhary, founder of the Software Freedom Law Centre, stressed a multi-layered approach. 'Each new disaster now presents new opportunities for disinformation peddlers,' she said. 'This is not a problem that can be solved by enacting new laws. Platforms need to take responsibility by investing more to tackle misinformation in different languages.'

As AI-generated content becomes more convincing and accessible, experts say India must adopt faster, more transparent, and tech-integrated communication strategies, especially during national tragedies, to curb the rising tide of digital deception.


Time of India
2 days ago
- Business
- Time of India
AI-generated content fuels misinformation after Air India crash
Days after the crash of an Air India Boeing 787 in Ahmedabad that killed 275 people, a preliminary investigation report began circulating in aviation circles. Apart from the emoji, the report appeared genuine: its aviation terminology gave it a professional, convincing look. Closer scrutiny by trained eyes, however, revealed that it had been created by an AI platform using details from a 2024 incident involving the South American airline LATAM. By the time the Indian government refuted the report as fake, news websites had already run headlines based on it, clouding the minds of the public, and even of several pilots.

With the June 12 Air India crash stirring public uproar amid scarce post-accident updates, a wave of false information, including pictures and videos created by generative AI, has flooded the internet. 'We've observed a disturbing pattern in how bad actors are leveraging AI and social media platforms to spread misinformation and commit fraud during sensitive events like the Air India Flight 171 crash,' said Amit Relan, co-founder and CEO at digital fraud detection firm mFilterIt.

In addition to fake news about the crash, Relan's firm found fake videos showing the aftermath of the accident and even a case of a fraudulent fundraising attempt. 'This is a classic case of emotionally-driven financial fraud, often operated from untraceable or unverified sources,' Relan said, advocating public education to help people distinguish legitimate from manipulated content, as well as collaboration on threat intelligence between platforms, law enforcement, and tech enablers.
As per the International Civil Aviation Organization (ICAO) module, it is essential to communicate effectively with the media to ensure the accuracy of the information provided and to maintain public trust in the aviation industry and accident investigation authorities. 'A well-planned and executed communication strategy can go a long way in minimising negative publicity and ensuring that the facts are reported in a timely and accurate manner,' it says.

India's civil aviation ministry said last week that the Aircraft Accident Investigation Bureau (AAIB) has successfully extracted data from the cockpit voice recorder (CVR) and flight data recorder (FDR) of the aircraft. Both the CVR and the FDR were moved to AAIB's lab in New Delhi only on 24 June, more than a week after they were recovered from the crash site in Ahmedabad. The ministry did not give any reason for the delay.

John Cox, a former airline pilot and chief executive of Safety Operating Systems, a provider of consulting services on accident investigations, said that with the growing prevalence of generative AI, there needs to be a paradigm shift in how India's AAIB provides information after an event like a crash. 'This is the most extensive case of misinformation that has been seen during any accident. The AAIB should be having daily briefings as done by agencies across the globe. Because in the absence of information, it is misinformation which fills the void,' Cox said.

BOOM, a fact-checking platform, found images of the aircraft with its tail ablaze and an AI-generated image of wreckage in front of the Ahmedabad airport. BOOM ran the images through AI image detectors, all of which indicated a high likelihood that they were AI-generated. However, none of the posts carried a disclaimer indicating the synthetic nature of the images.

Mishi Choudhary, an online civil rights activist and founder of the Software Freedom Law Centre, suggests multi-pronged solutions, including educating users and using better tools.
'Each new disaster now presents new opportunities for disinformation peddlers to muddy the waters and exploit heightened emotional vulnerability. This is not a problem that can be solved by enacting new laws,' she said. 'Platforms need to take responsibility by investing more to tackle misinformation in different languages, and be better prepared around such events.'