
Is there really a secret city under Egypt's pyramids?
Two Italian scientists claim to have discovered 38,000-year-old structures buried deep beneath the pyramids. But there's a big reason to be skeptical.
Sun above the pyramid of Khafre at Giza. Photograph by Christian Heeb, laif/Redux
For the past few weeks, the internet has been abuzz with stories about a secret city allegedly located under the Pyramids at Giza. A research team led by retired organic chemist Dr. Corrado Malanga and former academic and remote sensing expert Dr. Filippo Biondi claims to have discovered and reconstructed enormous 38,000-year-old structures buried deep beneath the pyramid of Khafre at Giza.
In a press conference held in Italy, Malanga and Biondi announced that, using a newly developed proprietary method for interpreting synthetic aperture radar (SAR) signals, they were able to detect structures two kilometers beneath the Khafre pyramid. According to the pair, they discovered eight shafts, surrounded by spiral pathways, that connect to two 90-meter cube-shaped structures. Above the shafts, they claim to have found five structures connected to one another by passageways. Using what appear to be AI-generated reconstructions, they, and others, have hypothesized that these structures are part of a legendary ancient city or even a prehistoric power-generating structure (i.e., a power station).
Rumors of hidden structures underneath the Giza plateau are nothing new. The idea dates back to the ancient Greek historian Herodotus and intermittently bubbled to the surface of popular consciousness throughout the Middle Ages and Renaissance. Such rumors became particularly popular among French scholars in the 19th century and again in the 20th, when American psychic Edgar Cayce popularized the idea that a secret hall of records was buried underneath the pyramid complex. The concept of a power station, allegedly built by aliens, has also been bouncing around pseudoscientific circles for a while. It is part of a broader conspiracy theory that credits impressive ancient architectural projects to aliens.
(How cosmic rays helped find a tunnel in Egypt's Great Pyramid.)
This newest iteration of the pyramid conspiracy theory has captured public attention because of the scientific credentials of its authors. In the past, Malanga and Biondi published a peer-reviewed article on the internal structure of the Khafre pyramid. Though these newer, sensational claims have not been peer-reviewed, and one of the authors is well known for publishing books about aliens, the combination of doctorates and an allegedly new technology lent the claims an air of credibility. The story went viral and was picked up by InfoWars, Joe Rogan, Piers Morgan, and other critics of 'mainstream archeology.'
'These claims were received by a public primed for such news from longtime claims of mysterious, hidden chambers under the pyramid,' says Dr. Flint Dibble, a well-respected archeologist and science communicator who has headed up 3-D digital mapping projects for a large excavation at Abydos in Egypt and teaches at Cardiff University. 'And they appeared legit because of the conflation of peer-reviewed research and the degrees that the scholars hold.'
But as other experts have pointed out, the problem with the lost city hypothesis is that it uses an unproven technology, takes imaginative leaps in its reconstructions, and fails to account for what we know about the archeology of the region.
(Who built the pyramids of Giza?)

Shallow Radar Technology
To begin with, there are the methods involved in scanning the ground beneath the Giza plateau. As Dibble and public archeologist Milo Rossi have explained, these methods have never been confirmed or proven, nor have they been independently verified. In conditions like these, synthetic aperture radar typically detects features only up to about two meters underground. It is difficult to imagine that SAR is providing credible information about structures 2,000 meters beneath the surface.
The pyramids of Giza with Cairo in the foreground. Photograph by Alex Saberi, Nat Geo Image Collection
To be clear, Malanga and Biondi have not discovered a new way of detecting structures two kilometers beneath the ground; instead they claim to have a new method of interpreting these synthetic aperture radar signals. If one compares the images of the radar signals published in the report with the reconstructions they generated, it is clear how much artistic license is being taken in the interpretation of the images. The technology does not allow scientists to create an entire 3-D model or produce the kinds of cross-sections envisioned in the reconstructions. As Dibble joked with Rossi in one podcast, the reconstruction appears to be based on the reactor room of Total Recall.
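A back-of-the-envelope calculation illustrates why microwave radar cannot plausibly image rock at that depth. The Python sketch below estimates the electromagnetic skin depth of dry limestone; it is not the authors' method, and the permittivity and conductivity values are order-of-magnitude assumptions rather than measured properties of the Giza plateau.

```python
import math

EPS0 = 8.854e-12          # vacuum permittivity, F/m
MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def skin_depth(freq_hz, rel_permittivity, conductivity):
    """One-way 1/e field-penetration depth in a lossy dielectric, in meters."""
    omega = 2 * math.pi * freq_hz
    eps = rel_permittivity * EPS0
    loss_tangent = conductivity / (omega * eps)
    alpha = omega * math.sqrt(MU0 * eps / 2) * math.sqrt(
        math.sqrt(1 + loss_tangent**2) - 1
    )
    return 1.0 / alpha

# Assumed values for dry limestone (order-of-magnitude guesses):
# relative permittivity ~7, conductivity ~1e-3 S/m.
delta = skin_depth(5.4e9, 7.0, 1e-3)  # C-band, as used by Sentinel-1
depth = 2000.0                        # claimed depth of the structures, m
round_trip_loss_db = 8.686 * (2 * depth) / delta  # 8.686 dB per neper

print(f"1/e penetration depth: {delta:.1f} m")
print(f"Round-trip attenuation at {depth:.0f} m: {round_trip_loss_db:.0f} dB")
```

Even under these generous assumptions, the one-way penetration depth comes out in the tens of meters, and the round-trip attenuation over two kilometers of rock runs to thousands of decibels, hopelessly far below any detectable signal. Moisture, which raises conductivity sharply, only makes matters worse.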
Alongside public educators like Dibble and Rossi, other established academics have criticized the discovery. Professor Lawrence B. Conyers, an expert in ground-penetrating radar at the University of Denver, told the Daily Mail that the claims of a vast city are 'a huge exaggeration.' Egyptian archeologist Dr. Zahi Hawass, the former Minister of Antiquities, called the claims 'baseless' and noted that the Egyptian Council of Antiquities did not grant permits for this kind of study to take place in the Khafre pyramid.
Summing up the interpretative and practical issues, Dr. Sarah Parcak, an award-winning scholar at the University of Alabama who uses cutting-edge satellite imagery to improve our understanding of Egyptian archeology, said, 'I could get any satellite imagery to look almost any way I wanted with enough manipulation… I think that's what these guys, they've done. They've misinterpreted the data. And the satellite imagery … SAR data can't see through rock, period.'

Water, Water, Everywhere
More problematic, Dibble explained, is the study's curious avoidance of all the archeological data about the Giza plateau that has been painstakingly collected over the past two centuries. All these studies, which utilized geochemical analysis, satellite remote sensing, seismic refraction, muon scans, electrical resistivity tomography, ultrasonic testing, ground-penetrating radar, and magnetometry, have been carefully checked against one another and in some instances confirmed through excavation and drilling into the bedrock. The cumulative weight of this evidence has led to a robust understanding of what lies beneath the pyramids, how the pyramids were built, and when they were constructed.
The most relevant piece of data here is the water table at Giza. An intensive study by Sharafeldin et al. in 2019 revealed that the water table there is only a few dozen meters under the surface of the plateau. The proximity of the groundwater, Dibble said, means that even today the Sphinx and other monuments are slowly eroding from water that sometimes 'wicks' up from beneath the ground. What this means for the new study is that if there really were megastructures some 2,000 meters underneath the pyramids, they would always have been part of an underwater city. Think Aquaman's Atlantis, not Amsterdam, Venice, or even the mythical Atlantis that fell into the sea.
(Meet the anti-Indiana Jones solving the pyramids' secrets.)
In general, water is a critical part of understanding the life course of the pyramids. The pyramids were built soon after the end of the African humid period, when greater rainfall meant that the Sahara was more like a verdant savannah. A recent study by Sheisha et al. in 2022 showed that during the period of construction the Khufu branch of the Nile extended right up to the Giza plateau, facilitating the transport of the stones needed for the construction of the pyramids. We do not need aliens when we have water.
