Air India black box damaged: Is this a setback to the probe? What happens now?
The black box of the crashed Air India AI-171 flight sustained damage when the plane hit a residential area in Ahmedabad moments after take-off last week. Reports say the recorder is likely to be sent to the US for data extraction. What information does the black box hold?
India is likely to send the black box recovered from the crashed Air India AI-171 flight to the United States for data extraction. The recorder has reportedly sustained damage after the London-bound Boeing 787-8 Dreamliner came down on a residential area in Ahmedabad, moments after take-off, last week.
A video of the crash showed the Air India flight struggling to stay airborne before rapidly descending and disappearing behind trees and buildings, followed by an explosion in the sky. The plane crashed into a doctors' accommodation building at the BJ Medical College and Civil Hospital, killing 241 of the 242 people on board and 33 on the ground.
But what is a black box and why is it necessary in the Air India crash investigation? We will explain.
What are black boxes?
The cockpit voice recorder (CVR) and the flight data recorder (FDR) make up the black boxes on an aircraft.
Also known as accident data recorders, flight data recorders store information on several parameters, including altitude, flight speed, flight control, engine performance, fuel, turbulence, wind speed, roll, and autopilot status.
A cockpit voice recorder records radio transmissions and other audio in the cockpit, such as conversations between pilots, engine sounds, background noises, landing gear extension and retraction, and instrumentation warnings.
The black boxes are usually bright orange in colour, so they can be easily found in the debris of the plane.
The flight data recorder from TWA flight 800 is displayed at NTSB headquarters July 25, 1996. File Photo/Reuters
How important are black boxes?
All commercial flights must have black boxes. The two recorders help investigators piece together events leading to an aircraft mishap.
CVRs pick up audio from the crew's microphones, pilots' headsets, and an area microphone near the centre of the cockpit. They can record two hours of audio data, while FDRs retain 25 hours of flight data.
Black boxes are heavily protected devices, weighing about 4.5 kg, that record information about a flight and help investigators find out what led to its crash.
'With the data retrieved from the FDR, the Safety Board can generate a computer animated video reconstruction of the flight. The investigator can then visualise the airplane's attitude, instrument readings, power settings and other characteristics of the flight. This animation enables the investigating team to visualise the last moments of the flight before the accident,' the US National Transportation Safety Board (NTSB) said on its website.
'Both the Flight Data Recorder and the Cockpit Voice Recorder have proven to be valuable tools in the accident investigation process. They can provide information that may be difficult or impossible to obtain by other means. When used in conjunction with other information gained in the investigation, the recorders are playing an ever increasing role in determining the probable cause of an aircraft accident,' it added.
Can black boxes be destroyed during a plane crash?
Black boxes are usually installed in the tail end of the aircraft, as this increases their chances of surviving a mishap.
The FDR and CVR are kept in a box made of titanium or stainless steel and wrapped with fire- and heat-resistant insulation. The black box is designed to withstand an acceleration of 3,400 Gs (3,400 times the force of gravity) and temperatures of up to 1,093 degrees Celsius for an hour.
FDRs can survive at depths of over 6,000 metres underwater. When submerged, the beacon fitted to these devices can transmit ultrasound signals for 30 days. This underwater locator beacon (ULB), which has a battery life of over six years, transmits sound from as deep as 14,000 feet and can be tracked by sonar and audio equipment.
These beacons do not send out ultrasonic pings if the plane crashes on land, so investigators have to scour the accident site for the black box.
Data from the black boxes is stored on 'stacked memory boards inside the crash-survivable memory unit (CSMU)', as per HowStuffWorks. These CSMUs can withstand extreme heat, crash impacts and tonnes of pressure.
'It is extremely rare for a black box to be destroyed. Black boxes have traditionally outperformed their design,' Scott Hamilton, director of Leeham Co., an aviation consulting company, told NPR.
'It would take a concentrated fire beyond its design strength, or an impact so high that it would be beyond what it could withstand.'
However, black boxes have not always helped investigators. It came to light in January that the flight data and cockpit voice recorders on the Jeju Air plane stopped recording nearly four minutes before the airliner hit a concrete structure at the end of a runway at South Korea's Muan International Airport on December 29 last year. The disaster, the deadliest air crash on South Korean soil, killed 179 people.
Questions have also arisen about the eight black boxes on the four hijacked flights that crashed during the 9/11 terrorist attacks on the US. As per a 2001 ABC News report, the recorders were recovered from Flight 93, which crashed in Pennsylvania. The cockpit voice transcript extracted from them is available online, Forbes reported.
As per ABC News, black boxes were also recovered from the collapsed portion of the Pentagon, into which a hijacked jet slammed during the attacks.
Will India send black boxes to the US?
India could send the black box of the crashed Air India flight to the US for analysis. The recorder sustained fire damage when the plane crashed in Ahmedabad, making it difficult to extract the data in India, The Economic Times (ET) reported, citing sources.
As per Ministry of Civil Aviation sources, the Centre will take a final call on whether to send the black box to the US. It may be sent to the National Transportation Safety Board laboratory in Washington, DC, for examination.
In the event it is sent to the US, a contingent of Indian officials will accompany the black box.
Sources previously told The Indian Express that the digital flight data recorder (DFDR) and the CVR will be analysed in India or sent abroad, depending on their physical condition and the extent of data analysis needed for the investigation.
'While AAIB had established a laboratory at its headquarters in Delhi last year, it is yet to be properly equipped to extract data from recorders which have sustained heavy damage. The NTSB team will carry them to their lab under protection and supervision from Indian officials to ensure that proper protocols are followed,' a person in the know told ET.
The source said it could take anywhere from two days to several months to extract data from the black box, depending on the damage.
'Since the recorder has been damaged, the chip will need to be extracted by removing the memory board so that there is no further damage to data. The electronic circuit will also have to be assessed for damage,' the person said.
With inputs from agencies
