
Volcanic Rocks Reveal How Gold Reaches Earth's Surface
Crystallized gold in rocky matrix.
Gold is a surprisingly common metal (it is more common than lead if we consider the bulk composition of Earth), but more than 99.999 percent of Earth's stores of gold and other precious metals lie buried under 3,000 kilometers of solid rock, locked away within Earth's mantle and metallic core and far beyond the reach of humankind.
A new study published by researchers from the University of Göttingen suggests that at least some of the supplies of gold and other precious metals that we rely on for their value and applications in modern technology may have come from Earth's core.
Compared to Earth's rocky mantle, the metallic core contains a slightly higher abundance of a particular isotope, ruthenium-100. When Earth formed around 4.5 billion years ago, this ruthenium was locked into the core together with gold and other precious metals.
Standard rock analysis methods aren't sensitive enough to identify and quantify ruthenium isotopes. The researchers amplified the signal by first dissolving the rocks in hot acid, then condensing the resulting vapor back into a liquid, and finally measuring the ruthenium signal in the concentrated samples.
Analyzing lava from Hawaiʻi, the researchers found an unusually high ruthenium-100 signal in the samples. Hawaiʻi's active volcanism is fed by large plumes of molten rock rising through Earth's mantle. The origin, dynamics and composition of such mantle plumes are still debated among geologists. The ruthenium signal suggests that these rocks ultimately originated at the core-mantle boundary.
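For context, isotope anomalies of this kind are conventionally reported in epsilon notation, the deviation of a sample's isotope ratio from a laboratory standard in parts per ten thousand (this is the standard convention in isotope geochemistry; the choice of ruthenium-101 as the reference isotope below is an illustrative assumption, not taken from the study):

$$
\varepsilon^{100}\mathrm{Ru} = \left( \frac{\left({}^{100}\mathrm{Ru}/{}^{101}\mathrm{Ru}\right)_{\mathrm{sample}}}{\left({}^{100}\mathrm{Ru}/{}^{101}\mathrm{Ru}\right)_{\mathrm{standard}}} - 1 \right) \times 10^{4}
$$

A positive value means the sample carries proportionally more ruthenium-100 than the reference material, which is the kind of excess described above for the Hawaiian lavas.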
"Our findings not only show that Earth's core is not as isolated as previously assumed. We can now also prove that huge volumes of super-heated mantle material—several hundreds of quadrillion metric tons of rock—originate at the core-mantle boundary and rise to Earth's surface to form ocean islands like Hawaiʻi," explains study coauthor Professor Matthias Willbold, researcher at Göttingen University's Department of Geochemistry and Isotope Geology.
"When the first results came in, we realized that we had literally struck gold. Our data confirmed that material from the core, including gold and other precious metals, is leaking into Earth's mantle above," explains study author Dr. Nils Messling, a geochemist working in the same department.
Pure gold is inert in Earth's mantle and tends to stay there. However, gold atoms can bond with three sulfur atoms to form a gold-trisulfur complex. This complex is highly mobile in the molten portions of the mantle, known as magma.
Where material from the core-mantle boundary region has the opportunity to rise toward the surface, such as in a mantle plume or along subduction zones, it can mix with sulfur-rich fluids and form gold-bearing magmas.
As the magma rises to the surface, degassing and circulation of hydrothermal fluids further concentrate the gold in veins and clusters, forming a deposit that can be mined.
The study,"Ru and W isotope systematics in ocean island basalts reveals core leakage," was published in the journal Nature.
Additional material and interviews provided by the University of Göttingen.
Related Articles


Fox News, 42 minutes ago
What your blood quietly reveals about your eating habits
Blood and urine tests have been found to detect the amount of ultraprocessed foods (UPFs) a person eats, according to new research. Using machine learning, scientists at the National Institutes of Health (NIH) identified hundreds of metabolites (molecules produced during metabolism) that correlated with processed food intake.

The team developed a "biomarker score" that predicts ultraprocessed food intake based on metabolite measurements in blood and urine, according to Erikka Loftfield, Ph.D., M.P.H., of the National Cancer Institute in Maryland.

The researchers drew baseline data from 718 older adults who provided urine and blood samples and reported their dietary habits over a 12-month period, as detailed in an NIH press release. Next, they conducted a small clinical trial of 20 adults. For two weeks, the group ate a diet high in ultraprocessed foods, and for another two weeks they ate a diet with no UPFs.

"In our study, we found that hundreds of serum and urine metabolites were correlated with percentage energy from ultraprocessed food intake," Loftfield told Fox News Digital. The findings were published in the journal PLOS Medicine.

Large-scale studies investigating the health risks of ultraprocessed foods often rely on self-reported dietary questionnaires, which can be prone to errors, per the NIH. The new blood and urine test helps to reduce human error by using objective biomarkers, a growing area of interest among researchers.

Loftfield added, "It was surprising to find that UPF-correlated metabolites are involved in numerous and diverse biological pathways, underscoring the complex impact of diet on the metabolome."

Ultraprocessed foods are defined as "ready-to-eat or ready-to-heat, industrially manufactured products, typically high in calories and low in essential nutrients," according to the NIH. Chronic diseases, obesity and various forms of cancer have been linked to diets that are heavy in UPFs.

Despite promising results, the researchers emphasized that the new method will require further validation before broader use. Since the current trial focused mainly on older adults, more research is needed across various age groups and diets, the experts said. "Metabolite scores should be evaluated and improved in populations with different diets and a wide range of UPF intake," Loftfield acknowledged.

This method could potentially be used in future research to link the consumption of processed foods with chronic diseases, according to the researchers.

"For individuals concerned about ultraprocessed food intake, one practical recommendation is to use 'nutrition facts' labels to avoid foods high in added sugars, saturated fat and sodium, as this can limit UPF intake and align with robust scientific research on diet and health," Loftfield suggested.
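To make the "biomarker score" idea concrete, here is a minimal sketch of the general approach: a penalized regression that maps metabolite levels to the share of dietary energy from UPFs. It runs on synthetic data and uses scikit-learn; it is an illustration of the concept, not the NIH team's actual pipeline.

```python
# Toy "biomarker score" sketch (illustrative only; not the NIH pipeline).
# We simulate metabolite levels, then fit a penalized linear model that
# predicts percent energy from ultraprocessed food.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_metabolites = 718, 300            # sizes loosely echo the study
X = rng.normal(size=(n_people, n_metabolites))  # standardized metabolite levels
true_weights = np.zeros(n_metabolites)
true_weights[:25] = rng.normal(size=25)         # only some metabolites matter
upf_energy = X @ true_weights + rng.normal(scale=0.5, size=n_people)

X_train, X_test, y_train, y_test = train_test_split(X, upf_energy, random_state=0)
model = LassoCV(cv=5).fit(X_train, y_train)     # L1 penalty selects metabolites
score = model.predict(X_test)                   # the "biomarker score"
print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
print(f"metabolites retained: {np.sum(model.coef_ != 0)}")
```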


CNET, 42 minutes ago
LLMs and AI Aren't the Same. Everything You Should Know About What's Behind Chatbots
Chances are, you've heard the term "large language models," or LLMs, when people talk about generative AI. But LLMs aren't quite synonymous with brand-name chatbots like ChatGPT, Google Gemini, Microsoft Copilot, Meta AI and Anthropic's Claude. These AI chatbots can produce impressive results, but they don't actually understand the meaning of words the way we do. Instead, they're the interface we use to interact with large language models. These underlying technologies are trained to recognize how words are used and which words frequently appear together, so they can predict future words, sentences or paragraphs.

Understanding how LLMs work is key to understanding how AI works. And as AI becomes increasingly common in our daily online experiences, that's something you ought to know. This is everything you need to know about LLMs and what they have to do with AI.

What is a language model?

You can think of a language model as a soothsayer for words. "A language model is something that tries to predict what language looks like that humans produce," said Mark Riedl, professor in the Georgia Tech School of Interactive Computing and associate director of the Georgia Tech Machine Learning Center. "What makes something a language model is whether it can predict future words given previous words." This is the basis of autocomplete functionality when you're texting, as well as of AI chatbots.

What is a large language model?

A large language model contains vast amounts of words from a wide array of sources. These models are measured in what is known as "parameters." So, what's a parameter? Well, LLMs use neural networks, which are machine learning models that take an input and perform mathematical calculations to produce an output. The variables in these computations are the parameters. A large language model can have 1 billion parameters or more. "We know that they're large when they produce a full paragraph of coherent fluid text," Riedl said.

How do large language models learn?

LLMs learn via a core AI process called deep learning. "It's a lot like when you teach a child -- you show a lot of examples," said Jason Alan Snyder, global CTO of ad agency Momentum Worldwide. In other words, you feed the LLM a library of content (what's known as training data) such as books, articles, code and social media posts to help it understand how words are used in different contexts, and even the more subtle nuances of language. The data collection and training practices of AI companies are the subject of some controversy and some lawsuits. Publishers like The New York Times, artists and other content catalog owners allege tech companies have used their copyrighted material without the necessary permissions. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed on Ziff Davis copyrights in training and operating its AI systems.)

AI models digest far more than a person could ever read in their lifetime -- something on the order of trillions of tokens. Tokens help AI models break down and process text. You can think of an AI model as a reader who needs help. The model breaks down a sentence into smaller pieces, or tokens -- which are equivalent to about four characters in English, or roughly three-quarters of a word -- so it can understand each piece and then the overall meaning. From there, the LLM can analyze how words connect and determine which words often appear together.
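As a toy illustration of that counting intuition, here is how "which words frequently appear together" can drive next-word prediction. This is a sketch only: real LLMs learn statistical weights over subword tokens rather than keeping literal tables like this.

```python
# Count which word follows which, then predict the most frequent follower.
# (Toy sketch; production LLMs use learned weights over subword tokens.)
from collections import Counter, defaultdict

corpus = "i went sailing on the deep blue sea . we went sailing on the lake ."
tokens = corpus.split()          # real tokenizers split into subword pieces,
                                 # roughly four characters each in English
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1      # tally next-word frequencies

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` seen in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("deep"))      # -> 'blue'
print(predict_next("went"))      # -> 'sailing'
```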
"It's like building this giant map of word relationships," Snyder said. "And then it starts to be able to do this really fun, cool thing, and it predicts what the next word is … and it compares the prediction to the actual word in the data and adjusts the internal map based on its accuracy." This prediction and adjustment happens billions of times, so the LLM is constantly refining its understanding of language and getting better at identifying patterns and predicting future words. It can even learn concepts and facts from the data to answer questions, generate creative text formats and translate languages. But they don't understand the meaning of words like we do -- all they know are the statistical relationships. LLMs also learn to improve their responses through reinforcement learning from human feedback. "You get a judgment or a preference from humans on which response was better given the input that it was given," said Maarten Sap, assistant professor at the Language Technologies Institute at Carnegie Mellon University. "And then you can teach the model to improve its responses." LLMs are good at handling some tasks but not others. Alexander Sikov/iStock/Getty Images Plus What do large language models do? Given a series of input words, an LLM will predict the next word in a sequence. For example, consider the phrase, "I went sailing on the deep blue..." Most people would probably guess "sea" because sailing, deep and blue are all words we associate with the sea. In other words, each word sets up context for what should come next. "These large language models, because they have a lot of parameters, can store a lot of patterns," Riedl said. "They are very good at being able to pick out these clues and make really, really good guesses at what comes next." What are the different kinds of language models? There are a couple kinds of sub-categories you might have heard, like small, reasoning and open-source/open-weights. Some of these models are multimodal, which means they are trained not just on text but also on images, video and audio. They are all language models and perform the same functions, but there are some key differences you should know. Is there such a thing as a small language model? Yes. Tech companies like Microsoft have introduced smaller models that are designed to operate "on device" and not require the same computing resources that an LLM does, but nevertheless help users tap into the power of generative AI. What are AI reasoning models? Reasoning models are a kind of LLM. These models give you a peek behind the curtain at a chatbot's train of thought while answering your questions. You might have seen this process if you've used DeepSeek, a Chinese AI chatbot. But what about open-source and open-weights models? Still, LLMs! These models are designed to be a bit more transparent about how they work. Open-source models let anyone see how the model was built, and they're typically available for anyone to customize and build one. Open-weights models give us some insight into how the model weighs specific characteristics when making decisions. Meta AI vs. ChatGPT: AI Chatbots Compared Meta AI vs. ChatGPT: AI Chatbots Compared Click to unmute Video Player is loading. Play Video Pause Skip Backward Skip Forward Next playlist item Unmute Current Time 0:04 / Duration 0:06 Loaded : 0.00% 0:04 Stream Type LIVE Seek to live, currently behind live LIVE Remaining Time - 0:02 Share Fullscreen This is a modal window. 
What do large language models do really well?

LLMs are very good at figuring out the connection between words and producing text that sounds natural. "They take an input, which can often be a set of instructions, like 'Do this for me,' or 'Tell me about this,' or 'Summarize this,' and are able to extract those patterns out of the input and produce a long string of fluid response," Riedl said. But they have several weaknesses.

Where do large language models struggle?

First, they're not good at telling the truth. In fact, they sometimes just make stuff up that sounds true, like when ChatGPT cited six fake court cases in a legal brief or when Google's Bard (the predecessor to Gemini) mistakenly credited the James Webb Space Telescope with taking the first pictures of a planet outside of our solar system. Those are known as hallucinations. "They are extremely unreliable in the sense that they confabulate and make up things a lot," Sap said. "They're not trained or designed by any means to spit out anything truthful."

They also struggle with queries that are fundamentally different from anything they've encountered before. That's because they're focused on finding and responding to patterns. A good example is a math problem with a unique set of numbers. "It may not be able to do that calculation correctly because it's not really solving math," Riedl said. "It is trying to relate your math question to previous examples of math questions that it has seen before."

While they excel at predicting words, they're not good at predicting the future, which includes planning and decision-making. "The idea of doing planning in the way that humans do it with … thinking about the different contingencies and alternatives and making choices, this seems to be a really hard roadblock for our current large language models right now," Riedl said.

Finally, they struggle with current events, because their training data typically only goes up to a certain point in time and anything that happens after that isn't part of their knowledge base. Because they don't have the capacity to distinguish between what is factually true and what is likely, they can confidently provide incorrect information about current events.
They also don't interact with the world the way we do. "This makes it difficult for them to grasp the nuances and complexities of current events that often require an understanding of context, social dynamics and real-world consequences," Snyder said.

How are LLMs integrated with search engines?

We're seeing retrieval capabilities evolve beyond what the models have been trained on, including connecting with search engines like Google so the models can conduct web searches and then feed those results into the LLM. This means they could better understand queries and provide responses that are more timely. "This helps our language models stay current and up-to-date, because they can actually look at new information on the internet and bring that in," Riedl said.

That was the goal, for instance, a while back with AI-powered Bing. Instead of tapping into search engines to enhance its responses, Microsoft looked to AI to improve its own search engine, in part by better understanding the true meaning behind consumer queries and better ranking the results for said queries. Last November, OpenAI introduced ChatGPT Search, with access to information from some news publishers.

But there are catches. Web search could make hallucinations worse without adequate fact-checking mechanisms in place. And LLMs would need to learn how to assess the reliability of web sources before citing them. Google learned that the hard way with the error-prone debut of its AI Overviews search results. The search company subsequently refined its AI Overviews results to reduce misleading or potentially dangerous summaries. But even recent reports have found that AI Overviews can't consistently tell you what year it is.
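As a closing sketch of that retrieval idea, the pattern is: search, stuff the results into the prompt, then generate. The `web_search` and `llm` functions below are hypothetical stand-ins, not a real search or model API.

```python
# Hedged sketch of retrieval-augmented generation. Both functions below are
# hypothetical placeholders; a real system would call a search API and an
# LLM endpoint instead.
def web_search(query: str) -> list[str]:
    """Hypothetical search call returning text snippets."""
    return ["Result snippet 1 about " + query,
            "Result snippet 2 about " + query]

def llm(prompt: str) -> str:
    """Hypothetical model call."""
    return "answer grounded in: " + prompt[:60] + "..."

def answer_with_retrieval(question: str) -> str:
    snippets = web_search(question)            # 1. retrieve fresh information
    context = "\n".join(snippets)
    prompt = (f"Using only the sources below, answer the question.\n"
              f"Sources:\n{context}\n\nQuestion: {question}")
    return llm(prompt)                         # 2. generate a grounded answer

print(answer_with_retrieval("What year is it?"))
```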


CBS News, 44 minutes ago
A misplaced MRI found a tumor on her spine. Doctors removed it through her eye in a first-of-its-kind surgery.
Karla Flores was 18 when she started experiencing double vision. She knew something was wrong but struggled to find a diagnosis. Finally, she saw an ophthalmologist who referred her to a neurosurgeon. Flores, then 19, was diagnosed with a chordoma wrapped around her brain stem. Chordomas are incredibly rare — only about 300 are diagnosed per year in the United States, according to the Cleveland Clinic — and they are slow-growing, malignant tumors.

The tumor was putting pressure on several of Flores' cranial nerves, said Dr. Mohammed Labib, a neurosurgeon at the University of Maryland Medical Center who led her treatment. Labib developed a complex surgical plan that required two surgeries to remove the tumor without damaging the delicate nerves.

When Flores underwent an MRI the morning of the first surgery, the technician positioned the camera a little lower than necessary. When looking at the scans, Labib realized Flores had a second chordoma. This one was at the top of her spine, at the front of her spinal cord. It was wrapped around her spinal cord and had invaded the vertebrae in her neck, Labib said. "They told me if they hadn't seen it, I could have been paralyzed," Flores told CBS News.

The two chordomas: Karla Flores was first diagnosed with a chordoma on her brain stem (the center red mass). She was then diagnosed with a second chordoma on her spinal cord (the lower red mass).

Despite the startling discovery, Labib decided to go ahead with the procedures to remove Flores' first chordoma. The tumor was successfully removed through a traditional neurosurgery and another procedure that went through Flores' nose.

In between the procedures, Labib studied the location of the second tumor. In most cases, he would make an incision in the spine to approach the tumor from the back, but the chordoma's location meant that wasn't an option. Going through Flores' nose again wouldn't give him enough space to operate. One colleague even suggested that there was nothing they could do. "I spoke to colleagues, and one of them said 'You're not gonna cure her from this,' basically, maybe she should be more of a palliative care patient," said Labib. Palliative care refers to making a terminally ill patient comfortable for their last days. "I wasn't very enthusiastic about that," Labib added.

Developing a unique surgical plan

Labib continued to study Flores' scans and look for ways to reach the second chordoma. While considering approaching it through her nose, he realized the cheekbone between her nose and eye was one of the obstacles blocking him from reaching the tumor. That gave him an idea: What if he approached through the side of Flores' eye? He had done it for a surgery several years prior, but never to remove a spinal tumor.

Labib spent weeks practicing the technique, which he called the "third nostril" approach, in UMMC's neurosurgery laboratory. He used cadaver heads and skull models to ensure that he could safely reach and operate on the tumor. He and other members of Flores' team spent weeks working through potential problems, including ensuring that they could create a surgical opening without damaging her eyeball and modifying surgical instruments so they would work for the procedure.

An illustration of the approach to Karla Flores' chordoma: the left line shows one of the obstacles presented by a nasal approach; the right line shows the "third nostril" approach Flores' surgeons used.
After he was confident in the approach, Labib told Flores and her family about the plan. "Her mother cried. Karla was emotional. Her father, he's not an emotional guy, but you could see from his silence he was concerned," Labib remembered. Flores said she trusted Labib and approved the surgery. "I was scared I wasn't going to see my parents again, because you never know what's going to happen when you go inside the surgery," she admitted.

In the operating room, facial plastic and reconstructive surgeon Dr. Kalpesh T. Vakharia cut through the membrane that protects the eye inside the lower eyelid and removed the bottom of Flores' eye socket and a portion of her cheekbone. That allowed Labib to reach the operating site. It also left Flores with no external scars, Vakharia said.

Once the bones were removed, Labib and head and neck surgeon Dr. Andrea Hebert drilled through Flores' vertebrae to reach the tumor. They dissected it carefully, following the procedure they had developed in the lab. By the time they were done, the chordoma was entirely removed, Labib said. "It was perfect," he said. Once that was done, Vakharia rebuilt Flores' eye socket with a titanium plate and rebuilt her cheek with bone from her hip. The process took about 20 hours, Labib said.

"Each step is an accomplishment"

The chordoma was removed, but Flores still had more treatment to come. A spinal surgeon stabilized the vertebrae that had been affected during the surgery. Six weeks later, she underwent radiation treatment to ensure there was no cancer in her body. Nearly a year after completing treatment, Flores has no evidence of cancer. The treatment was followed by rehabilitative therapy. Flores, now 20, struggles to move her left eye because of nerve damage from the chordoma, but is continuing to work on it in physical therapy.

Karla Flores and her cat, Sushi, in 2025.

Flores said that when she is feeling better, she wants to go to school to become a manicurist. She has follow-up appointments at UMMC every few months. Her biggest struggle right now is medical bills: She said she owes about $600,000. A GoFundMe has raised just a few thousand dollars. "I keep reminding myself to take one day at a time and know that each step is an accomplishment. I'm also glad I stood my ground and kept looking for help until I found it," Flores said in an emailed statement.

Labib said he hopes the procedure can be used to help operate on other difficult-to-reach tumors. "I think this opens a new corridor for approaching tumors that are in the upper cervical (or high part of the) spine," Labib said. "I think this third nostril approach is a smaller, easier and cleaner approach, and I think it's going to really take on these difficult tumors in front of the spinal cord."