
British explorer who vanished in Antarctica in 1959 found in glacier 6 decades later
His remains were found next to a radio, a wristwatch and a pipe by a Polish Antarctic expedition in January. The discovery came as a shock to his family, including his brother, who had given up hope of ever finding him.
"I had long given up on finding my brother. It is just remarkable, astonishing. I can't get over it," David Bell, 86, told BBC News.
Professor Dame Jane Francis, director of the British Antarctic Survey, praised Bell as a brave member of the Antarctic exploration team who contributed to early science and the legacy of polar research.
His brother recounted the moment the family received the horrific news more than six decades ago.
"The telegram boy said, 'I'm sorry to tell you, but this is bad news'," he said. David went upstairs to tell their parents. "It was a horrendous moment," he added.
Who was Dennis Bell?
Dennis Bell, nicknamed "Tink," was born in 1934. He worked with the Royal Air Force while training as a meteorologist. In 1958, he joined the Falkland Islands Dependencies Survey to work in Antarctica.
On a two-year posting with 12 men at the Admiralty Bay base on King George Island, his job was to send up weather balloons and radio the readings back to the UK every three hours.
He died on a surveying trip after falling through a crevasse. His companion, Jeff Stokes, attempted a rescue, lowering a rope for Bell to grab so he could be pulled out, but just as he reached the top it snapped and he fell again. When Stokes called down to him, there was no reply.
Found 65 years later
In January this year, a team of Polish researchers stumbled across bones among loose ice and rocks; more were found on the glacier surface. As snow began to fall, they placed a GPS marker so they could return later and bring their "fellow polar colleague" home. The team made four trips to collect the remains.
Bell's brother, along with his sister, will travel to England to bury the remains. "I'm just sad my parents never got to see this day," he said. "It's wonderful; I'm going to meet my brother. You might say we shouldn't be thrilled, but we are. He's been found - he's come home now."
Related Articles


NDTV | 8 hours ago
'Godfather Of AI' Reveals Bold Strategy To Save Humanity From AI Domination
Geoffrey Hinton, the British-Canadian computer scientist known as the "Godfather of AI", has expressed concerns that the technology he helped develop could potentially wipe out humanity. According to Mr Hinton, there's a 10-20% chance of this catastrophic outcome. Moreover, he's sceptical about the approach tech companies are taking to mitigate this risk, particularly in ensuring humans remain in control of AI systems.

"That's not going to work. They're going to be much smarter than us. They're going to have all sorts of ways to get around that," Mr Hinton said at Ai4, an industry conference in Las Vegas, as per CNN.

The scientist also warned that future AI systems could manipulate humans with ease, likening it to an adult bribing a child with candy. His warning comes after recent examples have shown AI systems deceiving, cheating, and stealing to achieve their goals, such as an AI model attempting to blackmail an engineer after discovering personal information in an email.

Instead of trying to dominate AI, Mr Hinton suggested instilling "maternal instincts" in AI models, allowing them to genuinely care about people, even as they surpass human intelligence. "AI systems will very quickly develop two subgoals, if they're smart: One is to stay alive… (and) the other subgoal is to get more control. There is good reason to believe that any kind of agentic AI will try to stay alive," Mr Hinton said.

He believes fostering a sense of compassion in AI is of paramount importance. At the conference, he pointed to the mother-child relationship as a model, where a mother's instincts and social pressure drive her to care for her baby, despite the baby's limited intelligence and control over her. While he expressed uncertainty about the technical specifics, he stressed that researchers must work on this challenge.

"That's the only good outcome. If it's not going to parent me, it's going to replace me. These super-intelligent caring AI mothers, most of them won't want to get rid of the maternal instinct because they don't want us to die," he added.

Geoffrey Hinton is renowned for his groundbreaking work on neural networks, which laid the foundation for the current AI revolution. In May 2023, he quit his job at Google so he could freely speak out about the risks of AI.


Time of India | 10 hours ago
New study warns: Routine AI use may affect doctors' tumor detection skills by 20%
Artificial Intelligence (AI) is transforming healthcare, offering tools that improve early disease detection, diagnostic accuracy, and treatment planning. AI-assisted systems can help clinicians identify conditions such as pre-cancerous growths or subtle anomalies that might otherwise be missed, boosting patient outcomes and overall clinical efficiency. However, recent research highlights a potential downside: excessive reliance on AI may lead to skill erosion among healthcare professionals. Even experienced doctors who regularly use AI tools may find their independent decision-making and observational abilities decline over time.

AI has the potential to transform medicine, improve patient outcomes, and streamline healthcare workflows, but the recent study serves as a reminder that over-reliance on technology can erode critical skills, even among experienced doctors. A balanced approach, where AI supports rather than replaces human judgment, is essential for sustaining both high-quality patient care and professional expertise.

Doctors' ability to spot pre-cancerous growths declines when AI is removed

A recent study in The Lancet Gastroenterology and Hepatology investigated how AI affects colonoscopy performance. The researchers found that AI assistance enabled doctors to detect pre-cancerous growths in the colon more effectively, potentially preventing progression to colorectal cancer. However, a concerning trend emerged: when AI support was removed, doctors' ability to detect tumors dropped by approximately 20%, even compared with rates recorded before AI was introduced (an illustrative calculation of such a drop appears at the end of this article). This suggests that clinicians may unconsciously rely on AI cues, reducing their independent observational skills.

Global adoption of AI in healthcare systems

Healthcare systems worldwide are increasingly investing in AI technology. AI is being positioned as a tool to boost diagnostic accuracy, improve workflow efficiency, and enhance patient safety. For instance, in 2025, the British government announced £11 million (S$19 million) in funding for a trial exploring how AI can facilitate earlier breast cancer detection. While AI offers promising benefits, these developments highlight the importance of maintaining human expertise alongside technological advancements.

Doctors losing skills due to AI dependence: Here's what the study found

The study revealed that AI can inadvertently encourage over-reliance, even among highly skilled clinicians. According to the research paper, AI support can result in doctors becoming less motivated, less focused, and less responsible when making decisions independently. The researchers studied four endoscopy centers in Poland, comparing colonoscopy detection rates three months before and after AI implementation. The procedures were randomized: some were performed with AI guidance, others without. The difference in detection rates highlighted the risk of skill erosion caused by dependency on AI tools.

AI can reduce tumor detection skills in expert doctors

Interestingly, the study participants were 19 highly experienced doctors, each with over 2,000 colonoscopies completed. Despite their expertise, AI still affected their ability to detect tumors without assistance. Professor Yuichi Mori, one of the study authors from the University of Oslo, warned that skill degradation could worsen as AI becomes more sophisticated. Dr. Omer Ahmad, a gastroenterologist at University College Hospital London, added: 'Although AI continues to offer great promise to enhance clinical outcomes, we must also safeguard against the quiet erosion of fundamental skills required for high-quality endoscopy.' He noted that the effect could be more pronounced for trainees or novice doctors, who might become overly dependent on AI before mastering essential diagnostic skills.

Excessive AI use may reduce brain engagement and focus, study reveals

Concerns about over-reliance on AI extend beyond healthcare. A 2025 MIT study found that students using OpenAI's ChatGPT to write essays demonstrated less cognitive engagement and brain activity, illustrating how AI can unintentionally reduce critical thinking skills. These findings underscore the need for balanced AI integration: leveraging AI's advantages while ensuring that human expertise and cognitive abilities remain intact.

Strategies to maintain skills while using AI

To mitigate skill erosion, healthcare professionals and institutions can adopt several strategies:

AI as a support tool, not a replacement – Use AI to assist decisions while maintaining independent judgment.
Regular performance monitoring – Track detection accuracy with and without AI to identify skill gaps.
Structured training rotations – Alternate between AI-assisted and manual procedures to preserve proficiency.
Active critical thinking – Encourage clinicians to question and evaluate AI suggestions rather than blindly following them.
Continuous education – Reinforce fundamental diagnostic skills through workshops and refresher courses.

By implementing these measures, healthcare systems can maximise AI's benefits while safeguarding essential clinical competencies. The information presented in this article is for educational and informational purposes only and is not intended as medical advice.
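To make the 20% figure concrete, here is a small, purely hypothetical calculation of a relative drop in detection rate, assuming (as the article's phrasing suggests) that the figure describes a relative change rather than a fall of 20 percentage points. The variable names and rates below are illustrative assumptions, not values reported by the Lancet study.

```python
def relative_change(rate_before: float, rate_after: float) -> float:
    """Relative change between two detection rates; -0.20 means a 20% drop."""
    return (rate_after - rate_before) / rate_before


# Hypothetical adenoma detection rates (fraction of colonoscopies in which
# at least one pre-cancerous growth is found). These numbers are illustrative
# only and are not taken from the study described above.
rate_before_ai = 0.280          # unassisted rate before AI was introduced
rate_unassisted_after = 0.224   # unassisted rate after clinicians got used to AI

drop = relative_change(rate_before_ai, rate_unassisted_after)
print(f"Relative change in detection rate: {drop:.0%}")   # prints: -20%
```

In this made-up example the rate falls by only about 6 percentage points in absolute terms, yet that is a 20% decline relative to the pre-AI baseline, which is the distinction worth keeping in mind when reading such headlines.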


Indian Express | 10 hours ago
Galaxies flying away from us: How Hubble's redshift led us to the Big Bang
On a crisp night in the late 1920s, Edwin Hubble stood in the dome of the 100-inch Hooker telescope at Mount Wilson Observatory, high above the smog and streetlamps of Los Angeles. Through that giant eye, he measured the light from distant 'spiral nebulae' — what we now call galaxies — and found something remarkable. Their light was shifted toward the red end of the spectrum, a sign that they were racing away from us. It was as if the universe itself were stretching.

When light from a moving source is stretched to longer wavelengths, we call it redshift — much like the way a passing train's whistle drops in pitch as it moves away. Hubble discovered that the farther a galaxy was, the greater its redshift — meaning the faster it was receding. This became the Hubble–Lemaître law, a simple but revolutionary equation showing that the universe is expanding (a symbolic sketch of the relationship appears at the end of this article).

But here's the subtlety: the galaxies are not flying through space as bullets through the air. Instead, the space between them is stretching. A common analogy is raisin bread dough rising in the oven — as the dough expands, every raisin moves away from every other raisin, and the farther apart two raisins start, the faster they separate. Crucially, the bread isn't expanding into the kitchen; the dough itself is the 'space.' In the same way, the universe isn't expanding into some empty void — it's the distance scale itself that's growing. This is why galaxies farther away show greater redshift: they're not just distant in space, they're distant in time, and the intervening space has been stretching for billions of years.

The implication was staggering: if the galaxies are all moving apart today, then in the distant past, they must have been much closer together. Follow this logic far enough back and you arrive at a moment when all the matter, energy, space, and time we know were compressed into a single, unimaginably dense point.

The first to put this into words was Georges Lemaître, a Belgian priest and physicist. In 1931, he proposed that the universe began from a 'primeval atom' — an idea that would later be nicknamed the Big Bang. At the time, the name was meant to be dismissive; British astronomer Fred Hoyle, a champion of the rival Steady State theory (which he would later develop further with his student, the celebrated Indian astrophysicist Jayant Narlikar), coined it in a radio broadcast to mock the idea of a cosmic explosion. Ironically, the label stuck and became the most famous phrase in cosmology.

For decades, the debate raged: was the universe eternal and unchanging, or did it have a beginning? The tie was broken not in an ivory tower, but in a New Jersey field. In 1964, Arno Penzias and Robert Wilson, two engineers at Bell Labs, were testing a radio antenna for satellite communications when they picked up a persistent hiss of microwave noise. They cleaned the antenna, even shooed away nesting pigeons — but the signal stayed. Unbeknownst to them, just 50 km away, Princeton physicist Robert Dicke and his team were preparing to search for the faint afterglow of the Big Bang. When the groups connected, the truth emerged: Penzias and Wilson had stumbled upon the cosmic microwave background (CMB), the fossil light from the universe's infancy, released about 380,000 years after the Big Bang.

The CMB confirmed that the universe had indeed begun in a hot, dense state and has been cooling and expanding ever since. In the first fraction of a second, an incredible burst of inflation stretched space faster than the speed of light.
This expansion wasn't into anything — rather, the very fabric of space itself was stretching, carrying galaxies along with it. As space grows, so does the distance scale we use to measure it: a galaxy whose light left billions of years ago was much closer then than it is today. That's why the farther away we look, the greater the redshift we see — we are peering not just across space, but back in time, to when the universe was smaller.

The CMB is the afterglow from a time about 380,000 years after the Big Bang, when the universe had cooled enough for electrons and protons to form neutral atoms, letting light travel freely for the first time. That light has been on a 13.8-billion-year journey to us, its wavelength stretched by cosmic expansion from the fierce glare of the early universe into the faint microwave glow we detect today.

In the first few minutes after the Big Bang, nuclear fusion forged the first elements: hydrogen, helium, and traces of lithium. Hundreds of millions of years later, the first stars and galaxies ignited, manufacturing heavier elements in their cores and seeding the cosmos with the building blocks of planets and life. Billions of years on, our Sun and Earth formed from recycled stardust, and here we are — creatures of carbon, contemplating the birth of time.

The Big Bang theory is not just an origin story; it's a framework that explains everything from the cosmic web of galaxies to the faintest ripples in the CMB. It predicts the abundance of light elements, the distribution of galaxies, and the universe's large-scale geometry. Without it, we'd have no coherent picture of cosmic history.

Today, the expansion first seen by Hubble is still ongoing — in fact, it's accelerating, driven by the mysterious dark energy. The latest measurements from telescopes like Hubble's successor, the James Webb Space Telescope, and surveys like the Sloan Digital Sky Survey continue to refine our understanding of the early universe, probing the first galaxies that emerged from cosmic darkness.

The journey from a lone astronomer squinting at galaxies to a global scientific collaboration mapping the cosmos is a reminder that big ideas often start with small clues. As Carl Sagan once put it, 'We are a way for the cosmos to know itself.' The Big Bang is not just about how the universe began — it's about how we began, too.

Shravan Hanasoge is an astrophysicist at the Tata Institute of Fundamental Research.
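For readers who want the key relationships in symbols, here is a minimal sketch in standard textbook notation. The symbols, and the rough value of the Hubble constant quoted in the comments, are conventional choices and do not come from the article itself.

```latex
% Redshift: the fractional stretching of light's wavelength between
% emission and observation.
\[ z = \frac{\lambda_{\mathrm{obs}} - \lambda_{\mathrm{emit}}}{\lambda_{\mathrm{emit}}} \]

% Hubble-Lemaitre law: recession velocity grows in proportion to distance,
% with the Hubble constant H_0 (roughly 70 km/s per megaparsec in
% present-day measurements) as the slope.
\[ v = H_0 \, d \]

% Equivalently, in terms of the cosmic scale factor a(t): light emitted when
% the universe was smaller arrives stretched by the same factor by which
% space has expanded since.
\[ 1 + z = \frac{a(t_{\mathrm{obs}})}{a(t_{\mathrm{emit}})} \]
```

With H_0 near 70 km/s per megaparsec, a galaxy 100 megaparsecs away recedes at roughly 7,000 km/s, which is the kind of straight-line proportionality Hubble saw when he plotted velocity against distance.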