Inside most secret radioactive site in US said to be 'Chernobyl in making'

Daily Mirror · 21-04-2025
The Hanford Site in Washington saw more than 400 billion gallons of contaminated liquid dumped into the ground during its years of operation - and now it could become a hub for AI development
A huge radioactive site sprawling across almost 600 square miles of desert has been compared to an "underground Chernobyl", with warnings that it is a disaster waiting to happen.
The Hanford Site in Washington, US, was constructed during World War II and is known as one of the country's most radioactive chemical contamination sites. The area was built as part of the Manhattan Project, where workers made the plutonium for the bomb that was eventually detonated over Nagasaki, Japan, on August 9, 1945. According to experts, the site's most hazardous waste is hidden in tanks and unlined trenches. Washington's Department of Ecology has warned that 177 leak-prone storage tanks buried on the property hold 56 million gallons of radioactive waste.

Now, the site has been proposed by the Department of Energy as a potential location for AI development. Richland was included on the department's list of candidate sites because it is home to the Department of Energy's Pacific Northwest National Laboratory, which could lead to partnerships with developers on advanced hardware for next-generation data centres and the power systems needed to run them.
The 295-acre parcel in Richland, however, is former Hanford nuclear site land, which the Department of Energy transferred via the Tri-City Development Council to the city of Richland in 2015 to be developed as part of a new Advanced Clean Energy Park. But the land has a long, deadly history.
The toxic facility was so dangerous that it was nicknamed "death mile" in 1985 after a string of local farmers were diagnosed with cancers. The illnesses were linked back to residents breathing in iodine-131, a radioactive isotope released by the site; in controlled doses, iodine-131 is used in medicine to treat thyroid cancer and hyperthyroidism.
Government officials gave locals a mere 30 days to leave the area; residents were paid for their land, but Indigenous tribes weren't given any compensation.

The War Powers Act, the wartime measure that predates the better-known War Powers Resolution of 1973, allowed the government to take over land for military purposes. Under the act, President Franklin Delano Roosevelt's administration acquired over 600 square miles of land.
The project brought up to 55,000 men and women to the area to start work on the secret development. Most knew very little about what they were working on, with the majority unaware it was for the war; it was reported that only 5 per cent actually knew why they had been hired.
The workers were given access to nearly ten dining halls, a hospital, a post office, barbershops and a cinema. The Department of Ecology revealed that they could also use dance halls and bowling alleys to keep themselves entertained outside working hours.

The site's first full-scale plutonium production reactor was reportedly completed within just 13 months, and operations launched in 1944. Physicist Leona Libby, then 23, had been part of the team that achieved the first nuclear chain reaction, work which later helped create the bomb.
Only once the bomb had been dropped were workers given further insight into the circumstances behind the project. After WWII finished, the production site continued operating through the Cold War, with its last reactor completed in 1963.
While the site was in use, more than 400 billion gallons of contaminated liquid were dumped into the ground. According to the Department of Ecology, that material then came into contact with groundwater and even reached parts of the Columbia River. The department, created in 1970, assessed concerns surrounding the site, and by 1989 the facility had been forced to close following problems first flagged in 1987. Despite decades of discussion about clean-up efforts, the Hanford Site is still viewed as an environmental concern.

Related Articles

The Guardian view on Britain's AI strategy: the risk is that it is dependency dressed up in digital hype

The Guardian · 2 hours ago

There was a time when Britain aspired to be a leader in technology. These days, it seems content to be a willing supplicant – handing over its data, infrastructure and public services to US tech giants in exchange for the promise of a few percentage points of efficiency gains. Worryingly, the artificial intelligence strategy of Sir Keir Starmer's government appears long on rhetoric, short on sovereignty and built on techno-utopian assumptions.

Last week Peter Kyle, the technology secretary, was promoting the use of AI-generated discharge letters in the NHS. The tech, he said, will process complex conversations between doctors and patients, slashing paperwork and streamlining services. Ministers say that by applying AI across the public sector, the government can save £45bn. But step back and a more familiar pattern emerges.

As Cecilia Rikap, a researcher at University College London, told the Politics Theory Other podcast, Britain risks becoming a satellite of the US tech industry – a nation whose public infrastructure serves primarily as a testing ground and data source for American AI models hosted on US-owned cloud computing networks. She warned that the UK should not become a site of 'extractivism', in which value – whether in the form of knowledge, labour or electricity – is supplied by Britain but monetised in the US.

It's not just that the UK lacks a domestic cloud ecosystem. It's that the government's strategy does nothing to build one. The concern is that public data, much of it drawn from the NHS and local authorities, will be shovelled into models built and trained abroad. The value captured from that data – whether in the form of model refinement or product development – will accrue not to the British public, but to US shareholders.

Even the promise of job creation appears shaky. Datacentres, the physical backbone of AI, are capital-intensive, energy-hungry, and each one employs only about 50 people. Meanwhile, Daron Acemoglu, the MIT economist and Nobel laureate, offers a still more sobering view: far from ushering in a golden age of labour augmentation, today's AI rollout is geared almost entirely toward labour displacement. Prof Acemoglu sees a fork: AI can empower workers – or replace them. Right now, it is doing the latter. Ministerial pledges of productivity gains may just mean fewer jobs – not better services.

The deeper problem is one of imagination. A government serious about digital sovereignty might build a public cloud, fund open-source AI models and create institutions capable of steering technological development toward social ends. Instead, we are offered efficiency-by-outsourcing – an AI strategy where Britain provides the inputs and America reaps the returns. In a 2024 paper, Prof Acemoglu challenged Goldman Sachs' 10-year forecast that AI would lead to global growth of 7% – about $7tn – and estimated instead under $1tn in gains. Much of this would be captured by US big tech.

There's nothing wrong with harnessing new technologies. But their deployment must not be structured in a way that entrenches dependency and hollows out public capacity. The Online Safety Act shows digital sovereignty can enforce national rules on global platforms, notably on porn sites. But current turmoil at the Alan Turing Institute suggests a deeper truth: the UK government is dazzled by American AI and has no clear plan of its own. Britain risks becoming not a tech pioneer, but a well-governed client state in someone else's digital empire.
Do you have an opinion on the issues raised in this article? If you would like to submit a response of up to 300 words by email to be considered for publication in our letters section, please click here.

Affidea Partners With Skin Analytics to Improve Access to Skin Cancer Assessment Across Europe Using AI Technology

Reuters · 6 hours ago

THE HAGUE, Netherlands, August 18, 2025 (EZ Newswire) -- Affidea, a leading pan-European provider of community-based polyclinics, advanced diagnostics, and multi-specialist care, including oncology, has entered a strategic partnership with Skin Analytics, a pioneering AI company in dermatology care. This collaboration supports Affidea's long-term vision of leveraging digital innovations to improve patient outcomes as part of its integrated skin cancer care pathway.

As part of this partnership, Affidea has implemented Skin Analytics' DERM, an AI medical device software for the automated analysis of skin lesions. The initial implementation of DERM began in Romania and Lithuania, with Greece to follow in September. These countries were selected based on market analysis, clinical readiness and innovation focus. The aim is to improve patient access to high-quality care by introducing clinically validated AI technology into the gold-standard skin care pathway already in place. This enables patient triage while streamlining workflows. Following this first phase, Affidea plans to roll out the AI solution across additional European markets.

Dr. Charles Niehaus, executive director for Affidea Group, stated: "Integrating cutting-edge digital innovations into clinical practice is essential to delivering the best possible care for our patients. Our partnership with Skin Analytics marks an exciting milestone, enabling us to leverage AI in supporting early detection of skin cancer and to provide even more patients with access to the prevention, diagnosis, and treatment programs we already offer in our dermatology units. This collaboration is not just about introducing new technology — it's about creating scalable, sustainable solutions that empower patients and clinical teams, address rising demand and bring timely, high-quality care across Europe."

Neil Daly, founder and CEO of Skin Analytics, stated: 'We're incredibly proud to be partnering with Affidea as our first European partner. This collaboration represents a major milestone in our mission to transform access to dermatology care using AI. By combining Affidea's clinical excellence and pan-European reach with our proven technology, DERM, we have an opportunity to reimagine how skin cancer is identified and managed across the continent. At scale, this partnership will drive earlier diagnoses, improve access to care and support dermatology teams facing rising demand — ultimately marking history for how skin cancer care is delivered and improving outcomes for patients across Europe.'

Skin cancer remains a significant public health challenge across Europe. According to GLOBOCAN 2022 data, Europe records approximately 10.4 melanoma cases per 100,000 people. Non-melanoma skin cancers were even more prevalent, with over 1.15 million cases in Europe in 2022 [1]. This burden continues to grow — age-standardised incidence rates for melanoma have risen sharply over the past decades, and non-melanoma cases now represent nearly 78% of all skin cancer diagnoses.

Source
[1] Global Cancer Observatory, International Agency for Research on Cancer

About Affidea Group
Affidea is a leading pan-European provider of specialist healthcare services, including cancer care, community-based polyclinics and advanced diagnostic imaging. Founded in 1991, the company operates over 410 centres across 15 countries, with more than 14 million patient visits every year.
Affidea is majority-owned by Groupe Bruxelles Lambert (GBL), a leading investment holding company focused on long-term value creation with a stable and supportive family shareholder base.

About Skin Analytics
Skin Analytics was founded in 2012 by Neil Daly to help more people survive cancer. It began providing teledermatology services in 2015 and since 2020 has been deployed in 27 NHS sites across the UK, having seen more than 180,000 NHS patients and detected more than 15,500 cancers. Post-market surveillance has shown that DERM found 97% of cancers (and accurately identified over 75% of benign lesions) with a negative predictive value (NPV) of 99.6% for all skin cancers. Settings in which DERM has been deployed have been able to reduce the number of face-to-face dermatologist appointments required for the volume of suspected skin cancer referrals by 60 to 95%, and DERM can autonomously discharge up to 40% of urgent suspected skin cancer referrals. DERM is the only Class III CE-marked AI dermatology medical device, enabling Skin Analytics to begin actively expanding its lifesaving services to Europe.

About DERM
DERM is an AI medical device (AIaMD) operating under a Class III CE mark. DERM is intended for use in the screening, triage and assessment of skin lesions suspicious for skin cancer. DERM analyses a dermoscopic image of a skin lesion and returns a suspected diagnosis and, if applicable, a referral recommendation for the lesion. DERM is indicated for use on dermoscopic images of cutaneous lesions where there is a suspicion of skin cancer in patients aged 18 years or over, in any body location except where specific exclusions apply. Deployed across NHS sites in the UK, Skin Analytics pathways have seen more than 180,000 patient cases and supported the identification of over 15,500 cancers. With a negative predictive value of 99.9% for ruling out melanomas, the platform has become a proven tool for aiding clinical decision-making while enhancing patient access and efficiency within skin cancer pathways.

Media Contact
Oana

###

SOURCE: Affidea Group
Copyright 2025 EZ Newswire
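As an aside on the figures quoted above: negative predictive value is the fraction of cases a device rules out that genuinely are not cancer, NPV = TN / (TN + FN). Here is a minimal sketch of the arithmetic with made-up counts; the release reports only the headline percentages, not the underlying numbers.

```python
# Illustration only: the counts below are hypothetical, chosen to reproduce
# the 99.6% NPV headline figure; the release does not publish raw numbers.
def npv(true_negatives: int, false_negatives: int) -> float:
    """Negative predictive value: of all lesions ruled out as benign,
    the fraction that genuinely are not cancer."""
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical cohort: 10,000 lesions discharged as benign,
# of which 40 later prove to be cancers.
print(f"NPV = {npv(9_960, 40):.1%}")  # -> NPV = 99.6%
```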

AI Secret Thoughts: What's Really Happening Inside AI's Mind?

Geeky Gadgets · 10 hours ago

What if we could truly understand the 'thoughts' of artificial intelligence? Imagine peering into the intricate inner workings of a large language model (LLM) like GPT or Claude, watching as it crafts a poem, solves a math problem, or deciphers nuanced language. These AI systems, trained on vast oceans of data, produce outputs so coherent and intelligent-seeming that they often blur the line between machine and mind. Yet, beneath this polished surface lies a mystery: how does AI actually work and process information? The answer is both fascinating and unsettling. Unlike human cognition, their 'thinking' is a web of statistical predictions, devoid of genuine understanding. This raises profound questions about the nature of intelligence itself, and whether we can ever fully trust what these systems create.

In this exploration, Anthropic, the creators of Claude, uncover how LLMs simulate reasoning, the tools researchers use to decode their opaque decision-making, and the challenges that make this task so complex. You'll learn how these models break down problems, why they sometimes produce false but convincing outputs, and how their inner logic can mislead even the experts. Along the way, we'll grapple with the ethical and practical stakes of understanding AI's 'mind', especially as these systems become increasingly embedded in healthcare, finance, and legal decisions. By the end, you might find yourself questioning not only how AI thinks but also what it means for us to interpret its 'thoughts' at all.

Understanding LLM Interpretability

How LLMs Process Information

Unlike traditional software, LLMs are not programmed with explicit instructions for specific tasks. Instead, they rely on predictive algorithms to determine the most likely next word in a sequence. This predictive approach enables them to perform a wide range of tasks, such as writing poetry, solving math problems, or interpreting nuanced language. However, their 'thinking' is not analogous to human cognition; it is functional, designed to simulate reasoning in the service of predictive accuracy. For example:

– When composing a poem, an LLM may internally structure rhymes and meter by combining abstract representations of language patterns.
– When solving a math problem, it might break the task into smaller steps, using its training data to arrive at a plausible solution.

These capabilities underscore the sophistication of LLMs but also raise questions about the structure and execution of their internal processes. While their outputs may appear intelligent, they are fundamentally the result of statistical pattern recognition rather than genuine understanding.

Peering Inside: Tools for AI Interpretability

To better understand how LLMs 'think', researchers employ interpretability tools that analyze their internal workings. These tools focus on:

– Examining model activations, which reveal how different parts of the model respond to specific inputs.
– Tracing internal representations of concepts to understand how information is encoded and processed.
– Mapping decision-making pathways to identify how outputs are generated from inputs.

For instance, researchers might inspect or manipulate a model's internal states to observe how it generates a specific response or solves a problem, as in the sketch below. This approach has provided insights into how LLMs handle abstract reasoning, such as planning sequences or synthesizing information from multiple sources. However, these techniques are still in their infancy and capture only a fraction of the complexity within these models.
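To make both ideas concrete, here is a minimal sketch of next-token prediction plus an activation probe implemented as a forward hook. The choice of GPT-2 and the Hugging Face transformers library is an assumption for illustration; the article names neither a specific model nor a toolkit.

```python
# Sketch only: GPT-2 via Hugging Face `transformers` stands in for the
# proprietary models (GPT, Claude) discussed in the article.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Interpretability probe: a forward hook that records the hidden states
# leaving transformer block 6 (an arbitrary choice of layer).
activations = {}
def save_output(name):
    def hook(module, args, output):
        activations[name] = output[0].detach()  # GPT-2 blocks return a tuple
    return hook
model.transformer.h[6].register_forward_hook(save_output("block_6"))

prompt = "The reactor was built to produce"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The model's "thought" at each step is just a probability distribution
# over its vocabulary for the next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r:>12}  p={p:.3f}")

# The captured hidden states are what researchers analyze and manipulate.
print(activations["block_6"].shape)  # (batch, seq_len, 768) for GPT-2 small
```

In real interpretability work the hook would feed probes or causal interventions rather than a print statement, but the mechanics of reading out internal states are the same.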
The challenge lies in scaling these tools to match the increasing size and sophistication of modern LLMs.

(Video: 'How AI Works: Inside the Mind of AI', via YouTube.)

Challenges in Decoding AI Behavior

Despite advancements, significant challenges persist in understanding LLMs. These include:

– Hallucinations: LLMs sometimes generate plausible but false information, a byproduct of their design to predict likely outputs rather than ensure factual accuracy.
– Sycophantic responses: Models may align with user expectations or biases, even when those expectations are incorrect or misleading.
– Misleading explanations: When asked to explain their decisions, LLMs might produce coherent but inaccurate rationales, obscuring their true internal logic.

These behaviors highlight the inherent opacity of LLMs and the limitations of current interpretability tools. The complexity of these models often exceeds the capabilities of existing methods to fully map their internal processes. This opacity poses risks, particularly in high-stakes applications where trust and accuracy are paramount.

Why Interpretability Matters

Understanding how LLMs process information is critical for building trust, especially as these models are increasingly deployed in sensitive and high-stakes domains. Examples include:

– Healthcare: In automated medical diagnosis, ensuring the model's reasoning aligns with clinical standards is essential to avoid harmful outcomes and protect patient safety.
– Finance: In financial analysis, transparency in decision-making can prevent costly errors and foster confidence in AI-driven systems.
– Legal systems: In legal applications, understanding how an AI arrives at its conclusions is crucial for ensuring fairness and accountability.

Interpretability research helps identify potential risks, such as deceptive or unintended behaviors, and improves model reliability. By studying how LLMs make decisions, researchers can develop safeguards to enhance transparency and prevent misuse. This is particularly important as AI systems become more integrated into critical aspects of society, where errors or biases could have far-reaching consequences.

The Road Ahead: Future Directions in AI Interpretability

The future of AI interpretability lies in scaling tools to analyze larger, more advanced models and creating automated systems to assist in decoding their behavior. Researchers are exploring several promising directions, including:

– Developing AI-powered analysis tools that act as 'microscopes', offering detailed insights into model decision-making processes.
– Building transparency frameworks to bridge the gap between human expectations and machine behavior, ensuring that AI systems align with ethical and practical standards.
– Studying how LLMs evolve during training to better understand their internal structures, learning processes, and potential vulnerabilities.
– Designing interpretability techniques that are scalable and adaptable to future generations of AI models, ensuring continued progress in understanding their behavior.

These advancements aim to provide a clearer picture of how LLMs process information, allowing developers to design safer and more reliable AI systems. By prioritizing interpretability, researchers can address the challenges posed by increasingly complex models and ensure that AI technologies are used responsibly and effectively.

Media Credit: Anthropic
