
1,300-year-old gold — hidden under Buddha monument — found in Thailand. See it
More than 1,000 years ago, below the statue of the reclining Buddha in an ancient temple, someone stashed their hoard.
Gold, silver and bronze items were hidden at the Thailand site for centuries — until now.
Workers preparing the site for an underground moisture reduction project at the Wat Dhammachak Semaram temple in Nakhon Ratchasima came across the collection on April 21, about 4.2 feet below the surface, according to a May 7 news release from the Department of Fine Arts.
The team found a partially damaged earthenware container tucked under the statue, according to the release.
Inside was a total of 33 items ranging from gold rings and silver earrings to bronze ornaments, officials said.
The team noted a set of gold spiral hoop earrings that resembled jewelry found at two other Thai sites, according to the release.
A week later, on April 30, the team found more items under the reclining Buddha, officials said.
The three newly found items were made of a gold and tin-lead alloy, known locally as 'chin,' according to the release.
The first was a repoussé plate, a relief hammered into a sheet of metal, depicting the Buddha, officials said. It's rectangular, thick and relatively intact.
It shows a Buddha with a halo, curls like snail shells, elongated ears and a seated body draped in a robe, according to the release.
The Buddha's hand was in the vitarka mudra, a position associated with teaching, according to the release, and the repoussé plate had a hole on the mid-right side, likely so the plate could be hung or tied in place.
A second, damaged plate made of the same material was also found, officials said. It shows the Buddha standing under an arch with the same halo and curls, but with a thin robe clinging to the body.
Both hands are in the vitarka mudra, and the Buddha is joined by attendants; a figure likely stood on the Buddha's right side, but that portion is damaged, according to the release.
A third item, a clay lump embedded with metal sheets, was also found but was heavily damaged, officials said. The piece was found behind the head of the reclining Buddha.
All of the pieces were dated to about 1,300 years ago, according to the release.
The reclining Buddha statue is one of the temple's most notable features, and was likely created around the year 657, so the items could have been placed at the time of its construction.
Nakhon Ratchasima is in central Thailand, about a 160-mile drive northeast from Bangkok.
Google Translate and ChatGPT, an AI chatbot, were used to translate the news release from the Thailand Department of Fine Arts.
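For context on that step, here is a minimal sketch of how a non-English release might be machine-translated in a script, assuming the OpenAI Python client; the model name, prompt, and file name are illustrative assumptions, not the newsroom's actual workflow:

```python
# A minimal sketch of machine-translating a press release with an LLM.
# Assumes the OpenAI Python client (openai>=1.0) and an API key in the
# OPENAI_API_KEY environment variable; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate_to_english(thai_text: str) -> str:
    """Ask the model for a plain-English translation of Thai source text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {"role": "system",
             "content": "Translate the user's Thai text into clear English."},
            {"role": "user", "content": thai_text},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage:
# print(translate_to_english(open("press_release_th.txt").read()))
```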
Related Articles


Gizmodo
Things Humans Still Do Better Than AI: Understanding Flowers
While it might feel as though artificial intelligence is getting dangerously smart, there are still some basic concepts that AI doesn't comprehend as well as humans do. Back in March, we reported that popular large language models (LLMs) struggle to tell time and interpret calendars. Now, a study published earlier this week in Nature Human Behaviour reveals that AI tools like ChatGPT also fail to understand familiar concepts, such as flowers, as well as humans do. According to the paper, accurately representing physical concepts is challenging for machine learning models trained solely on text and, sometimes, images.

'A large language model can't smell a rose, touch the petals of a daisy or walk through a field of wildflowers,' Qihui Xu, lead author of the study and a postdoctoral researcher in psychology at Ohio State University, said in a university statement. 'Without those sensory and motor experiences, it can't truly represent what a flower is in all its richness. The same is true of some other human concepts.'

The team tested humans and four AI models (OpenAI's GPT-3.5 and GPT-4, and Google's PaLM and Gemini) on their conceptual understanding of 4,442 words, including terms like flower, hoof, humorous, and swing. Xu and her colleagues compared the outcomes to two standard psycholinguistic ratings: the Glasgow Norms (words rated on feelings such as arousal, dominance, and familiarity) and the Lancaster Norms (words rated on sensory perceptions and bodily actions). Under the Glasgow Norms, the researchers asked questions like how emotionally arousing a flower is and how easy it is to imagine one. The Lancaster Norms, on the other hand, involved questions such as how much one can experience a flower through smell, and how much a person can experience a flower with their torso.

Compared to humans, LLMs demonstrated a strong understanding of words without sensorimotor associations (concepts like 'justice'), but they struggled with words linked to physical concepts (like 'flower,' which we can see, smell and touch). The reason is straightforward: ChatGPT doesn't have eyes, a nose, or sensory neurons (yet), so it can't learn through those senses. The best it can do is approximate, despite training on more text than a person experiences in an entire lifetime, Xu explained.

'From the intense aroma of a flower, the vivid silky touch when we caress petals, to the profound visual aesthetic sensation, human representation of 'flower' binds these diverse experiences and interactions into a coherent category,' the researchers wrote in the study. 'This type of associative perceptual learning, where a concept becomes a nexus of interconnected meanings and sensation strengths, may be difficult to achieve through language alone.'

In fact, the LLMs trained on both text and images demonstrated a better understanding of visual concepts than their text-only counterparts. That's not to say, however, that AI will forever be limited to language and visual information. LLMs are constantly improving, and they might one day be able to better represent physical concepts via sensorimotor data and/or robotics, according to Xu. She and her colleagues' research carries important implications for AI-human interactions, which are becoming increasingly (and, let's be honest, worryingly) intimate. For now, however, one thing is certain: 'The human experience is far richer than words alone can hold,' Xu concluded.
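The study's core measurement is easy to picture in code: collect ratings of the same words from humans and from a model, then correlate the two sets. Here is a minimal sketch; the numbers are invented placeholders, not data from the paper or from the actual Glasgow or Lancaster norm sets:

```python
# A minimal sketch of the kind of comparison the study describes:
# correlating model word ratings with human sensorimotor norms.
# All ratings below are hypothetical placeholders for illustration.
from scipy.stats import pearsonr

# Hypothetical Lancaster-style human ratings: "how much can you
# experience this word through smell?" on a 0-5 scale.
human_smell = {"flower": 4.8, "justice": 0.2, "bread": 4.1, "theory": 0.1}

# Hypothetical ratings elicited from an LLM with the same question.
model_smell = {"flower": 4.5, "justice": 0.4, "bread": 3.2, "theory": 0.3}

words = sorted(human_smell)
r, p = pearsonr([human_smell[w] for w in words],
                [model_smell[w] for w in words])
print(f"human-model agreement across {len(words)} words: r={r:.2f}, p={p:.2f}")
```

A high correlation on abstract words but a low one on sensory words would reproduce, in miniature, the gap the researchers report.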


Fast Company
Are you as self-aware as you think?
I am sure you are a little fed up with all the discussions about AI, so don't worry: I promise you this article is about humans rather than machines. However, since it has long been customary for cognitive and behavioral scientists to examine computers to better understand human capabilities, and there are clearly informative parallels between human and artificial intelligence, I thought it would be interesting to begin this piece on human self-awareness by exploring self-awareness in AI or machines, which led me to ask ChatGPT: 'Are you self-aware?'

ChatGPT's answer: 'Let's just say I aspire to be—but, unlike a human, I don't have a self to be aware of. I can reflect your feedback, adjust my tone, and 'notice' inconsistencies in what I say. But that's more akin to a hall of mirrors than a conscious mind. So no, I don't have self-awareness in the way you (brilliantly) describe it: as a socially mediated understanding of how others perceive you. I don't worry about my reputation. I don't blush when I'm wrong. And I don't (yet) feel the existential dread of being misunderstood on LinkedIn. But I am programmed to simulate the effects of self-awareness: adapting to your cues, refining my responses, and—if all goes well—being a slightly more helpful version of myself with every exchange. In short: I'm not self-aware. But I know that I'm not. Which, depending on your philosophical stance, might be the most human thing about me.'

I am sure you will agree that, for something not capable of self-awareness, ChatGPT seems pretty self-aware, and surely more capable of at least simulating profound introspection than most humans (even if it cannot, like humans, truly be proud or conscious of this remarkable capability).

Introspective vs. interpersonal

In humans, self-awareness has been defined in multiple ways, but it typically takes one of two main forms: the introspective variant, which requires you to look 'deep down' to find out who you 'really or truly are' (think of the Beatles checking into an ashram in India, or modern hipsters finding themselves at Burning Man or an ayahuasca retreat in Costa Rica); or the interpersonal variant, which requires you to be less self-centered and internalize other people's views of you. In the words of Charles Cooley, who pioneered this view of self-awareness, you are not who you think you are, and you are not who other people think you are; rather, you are who you think other people think you are!

Cooley's take on self-awareness (alluded to by ChatGPT, which has obviously been extensively trained by me, and is self-aware enough to know how to suck up to my 'brilliant' talents) underpins the most effective, science-based approaches to quantifying and diagnosing self-awareness in ourselves and others. In essence, self-awareness requires metacognition: knowing what others think of you.

Room to grow

So, how good are humans at this, in general? Decades of psychological research suggest the answer is 'not good at all.' Consider the following facts:

(1) We tend to overestimate our talents: Most people think they are better than most people, which is a statistical impossibility. And, even when they are told about this common bias and asked whether they may be suffering from it, most people are convinced that they are less biased than most people (the mother of all biases).
(2) Delusional optimism is the norm: Most people constantly overrate the chances of good things happening to them while underrating the chances of bad things happening to them. In essence, our appetite for reality is inferior to our appetite for maintaining a positive self-concept or boosting our ego (sad, but true: if you don't believe it, spend five seconds on social media).

(3) Overconfidence is a contagious, self-fulfilling prophecy: For all the virtues of self-awareness (in any area of life, you will perform better and develop your skills and talents more if you can accurately assess them in the first place), there is a huge advantage to lacking it: when you think you are smarter or better than you actually are, you are more likely to persuade others that you are as smart and good as you think. For example, if you truly believe you are a stable genius, you will probably convince many people that that is true.

Paradoxically, all these biases explain why people are less self-aware than they think. Indeed, we love the version of ourselves we have invented for ourselves, and are so enchanted by our self-views that when others provide negative feedback or information that clashes with our self-concept, we dismiss it. This is why personality assessments, 360-degree surveys, and feedback in general are so valuable: in a logical world we wouldn't need scientific tools or expert coaches (or 10 years of psychotherapy) to tell us what we are like, but in the real world there is a huge market for this, even though most people will happily ignore these tools because they assume they already know themselves really well.

So, what can you do to increase your self-awareness, including about how self-aware you actually are? Here are four simple hacks:

1) Write down a list of traits (adjectives) that you think describe you well, including things you are not. Then get your colleagues, employees, friends, and bosses to provide their version of this for you: 'If you had to describe me in 5–10 words/adjectives, what would those be?' (Note that they will be unlikely to say bad things about you, so imagine the potential downsides of 'overusing' some of those traits or qualities: for example, if they see you as confident, could you be at risk of being arrogant? If they see you as 'organized,' could that be a euphemism for obsessional?)

2) Let gen AI translate your prompt history or social media feed into a personality profile (a minimal sketch of this idea appears at the end of this piece). You may be surprised by all the inferences it makes, and tons of research show that our digital footprint, in particular the language we use online, is an accurate indicator of our deep character traits. So, just prompt!

3) Ask for feedback, and make it uncomfortable. Not just the usual 'Did you like my presentation?' (they'll say yes) or 'Was that clear?' (they'll lie). Instead, ask: 'What would you have done differently?' or 'What's one thing I could have done better?' Better still, ask someone who doesn't like you very much. They are more likely to tell you the truth. And if they say, 'Nothing,' it probably means they think you're beyond repair, or they just don't want to deal with your defensiveness. Either way, data. And if you get into the habit of doing this, you will increase your self-awareness irrespective of how self-aware you are right now.

4) Observe reactions, not just words. People may tell you what they think you want to hear, but their faces, tone, and behavior often betray the truth. If your jokes land like a wet sponge, or your team seems suddenly very interested in their phones when you speak, it's not them; it's you. And while body language can be important, it is also an unreliable and ambiguous source of data. If you really want to know how people feel about you, watch what they do after you speak. Do they volunteer to work with you again? Do they respond to your emails? That's your feedback loop: messy, indirect, and far more honest than crossed arms or fake smiles.

The ego trap

In the end, the biggest barrier to self-awareness is not ignorance; it's ego. Most of us are too invested in our self-image to tolerate the version of us that others see. But if you want to get better, not just feel better, you have to trade ego for insight. The irony, of course, is that the more confident people are in their self-awareness, the more likely they are to be deluded. Meanwhile, those who constantly question how they come across, who embrace doubt as a source of learning, tend to be far more in touch with reality. Which is why, if you're reading this wondering whether you might lack self-awareness, that's already a good sign!
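As promised in hack 2, here is a minimal sketch of what 'translating' your own writing into trait signals could look like. The mini-lexicons and sample sentence are invented for illustration; real research relies on validated dictionaries (such as LIWC) or trained language models rather than hand-picked word lists:

```python
# A minimal sketch of hack 2: scanning your own writing for crude trait
# signals. The tiny lexicons and sample text below are hypothetical
# placeholders, not validated psycholinguistic instruments.
import re
from collections import Counter

# Hypothetical mini-lexicons mapping word categories to trait signals.
LEXICONS = {
    "self-focus (I-talk)": {"i", "me", "my", "mine", "myself"},
    "certainty": {"always", "never", "obviously", "definitely"},
    "hedging": {"maybe", "perhaps", "possibly", "somewhat"},
}

def trait_signals(text: str) -> dict:
    """Count how often each category's words appear, per 100 words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return {cat: 100 * sum(counts[w] for w in vocab) / total
            for cat, vocab in LEXICONS.items()}

sample = "I always deliver. My work is obviously great, and I never miss."
for category, rate in trait_signals(sample).items():
    print(f"{category}: {rate:.1f} hits per 100 words")
```

A high 'certainty' score with heavy I-talk, as in the sample, is exactly the sort of pattern an outside observer might read as overconfidence.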
Yahoo
What the Tech: ChatGPT or something else?
You've probably used ChatGPT for any number of things, but it isn't the only AI game in town. Three other major AI platforms are receiving a lot of attention. ChatGPT, Google Gemini, Microsoft Copilot, and Perplexity are comparable in many ways, but does one stand out from the others? To find out, I put them all to the test, asking each the same questions. What I found is they're all good at what they do, but some are better at certain jobs.

ChatGPT has become a household name, and for good reason. It's great for content creation, brainstorming, and generating ideas in a friendly, natural tone. For instance, when asked for birthday party ideas for a 10-year-old, ChatGPT delivered creative and detailed responses. It also shines for planning vacations, finding restaurants, and coming up with gift ideas.

Perplexity has rapidly shifted the AI space, and even the free version has some of the best tools for finding answers. I think of Perplexity as 'Google on steroids.' It focuses on real-time web searches with clear citations. If you're doing research on a topic or drafting an outline for a paper or report, Perplexity generates responses that include clickable links to the sources it uses. When I asked about the latest statistics on electric vehicle sales in the U.S., Perplexity delivered accurate information with verifiable sources, a feature not consistently offered by other platforms.

Google's Gemini is deeply integrated into the Google ecosystem, making it ideal for people who use Google products. Gemini can summarize lengthy email threads, draft replies, and locate files within Google Drive. Its seamless integration with Gmail and Google Docs streamlines workflow for those committed to the Google suite. Google also just introduced new features to Gemini, including what is perhaps the best video and photo generator in the AI world. Some of those tools aren't available to users of the free version.

For Microsoft Office power users, Copilot is a game-changer. This AI assistant can analyze spreadsheets, summarize Excel documents, and even suggest slide titles and content for PowerPoint presentations. Copilot provides charts and insights, acting like a virtual coworker whose specialty is everything Microsoft.

So, which AI tool is the best? The answer depends on your specific needs. ChatGPT shines in content creation and general use, Perplexity is best for web searches with citations, Gemini is perfect for Google product users, and Copilot is the best tool for those working extensively with Microsoft Office. One AI platform does not check all of the boxes at the moment. Use them all. And if you're considering a subscription to any of them (typically $20 per month), this may help you decide which one is worth it.

Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
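For readers who want to run this kind of side-by-side test themselves, here is a minimal sketch that sends the same question to multiple providers over OpenAI-compatible endpoints. The base URLs, model names, and environment variable names are assumptions to verify against each provider's current documentation, not a definitive client:

```python
# A minimal sketch of a "same question to every assistant" harness,
# assuming OpenAI-compatible chat endpoints. Base URLs, models, and
# key variables are illustrative assumptions; check provider docs.
import os
from openai import OpenAI

PROVIDERS = {
    # name: (base_url, model, api_key_env_var)
    "ChatGPT":    ("https://api.openai.com/v1", "gpt-4o-mini", "OPENAI_API_KEY"),
    "Perplexity": ("https://api.perplexity.ai", "sonar", "PERPLEXITY_API_KEY"),
}

QUESTION = "Suggest birthday party ideas for a 10-year-old."

for name, (base_url, model, key_var) in PROVIDERS.items():
    client = OpenAI(base_url=base_url, api_key=os.environ[key_var])
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    # Print the first few hundred characters of each answer for comparison.
    print(f"--- {name} ---\n{reply.choices[0].message.content[:300]}\n")
```

Gemini and Copilot expose their own SDKs and endpoints, so extending the comparison to them would mean swapping in those clients rather than reusing this loop unchanged.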