
Platinum And Palladium: The Next Bubble Minerals
I last wrote about platinum for Forbes on June 23rd.
This was the chart:
The platinum chart on June 23rd
When I first wrote about it here on May 15th, platinum was around $1,000.
It is now at $1,400.
This is the current situation:
The current platinum chart
Certainly, a trend of appreciating precious metals is at work, but there is also the theme of 'strategic and critical metals' anxiety pushing platinum. On top of that, under the hood, is AI – not the AI under the hood of cars, but AI as a core driver of market moves. AI means the internal combustion engine is not going away, because the implied future demand for energy is now effectively infinite. Demand for AI is insatiable, and AI runs on silicon and raw energy. As such, every form of energy generation will be utilized, which is why, if you wondered, nuclear power is no longer the scourge of humanity and is back in favor, as it was in the 1950s. As the world keeps burning carbon-based fuels alongside everything else that generates energy, you had better have platinum to try to clean up the mess.
You had better have palladium, too.
The trouble is, only roughly 200 tonnes of each metal are mined every year. That is basically none. Even precious, rare gold has an annual production of about 3,200 tonnes. I focus on these numbers because they are so simple and stark. 200 tonnes is about what 100 American families consume, by weight of stuff, in a year. It is a rounding error in the modern economic scheme of things. The price of platinum and palladium must rise.
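The scarcity arithmetic above can be sanity-checked in a few lines. The production figures are the article's round numbers, not exchange data, and the world-population figure of 8 billion is my own assumption for illustration:

```python
# Approximate annual mine production, in tonnes (the article's round numbers).
production_t = {"gold": 3_200, "platinum": 200, "palladium": 200}

# Gold, itself famously scarce, is mined at 16x the rate of platinum.
ratio = production_t["gold"] / production_t["platinum"]
print(f"gold : platinum production = {ratio:.0f} : 1")

# 200 tonnes spread across the whole world really is a rounding error:
# 200 tonnes = 200,000,000 grams, or about 25 milligrams per person
# per year, assuming a world population of 8 billion.
per_person_g = production_t["platinum"] * 1_000_000 / 8_000_000_000
print(f"platinum mined per person per year: {per_person_g:.3f} g")
```

Run it and the point makes itself: the entire annual output of either metal works out to a few grains of sand per human being.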
It is quite a tricky prospect to ride this near-certainty, and as such I hold ETFs and Sibanye Stillwater (NYSE:SBSW). I'm no fan of pricey or difficult-to-hold stocks, which is why physical ETFs are my preferred instrument, and SBSW sneaks in as the only stock I can stomach. Meanwhile, the chartist in me sees palladium as the sweet spot. Here is why:
Platinum and palladium charts compared
In old market parlance, palladium is the fast horse.
But, like the market-stall trader selling steak knives, I have to say: 'That's not all, Mister, take a look at this!'
Platinum, palladium and gold charts compared
This chart tells a story of platinum and palladium following gold. You would expect that, but what you see in 2016 is how gold and platinum used to be valued: at 1:1. That parity is gone, but why wouldn't it return? Production is 3,200 tonnes a year for gold and 200 tonnes for platinum, and for a very long time platinum was more expensive than gold – just as it still is for Rolex owners today.
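To make the parity argument concrete, here is a minimal sketch of the implied move if the old 1:1 gold ratio returned. The gold price used is an illustrative assumption, not a live quote; the platinum price is the $1,400 figure cited above:

```python
# Illustrative prices per troy ounce (assumptions for the sketch, not quotes).
gold_usd = 3_300.0      # assumed gold price
platinum_usd = 1_400.0  # platinum price cited in the article

gold_ratio = gold_usd / platinum_usd
implied_upside = gold_ratio - 1.0
print(f"gold/platinum ratio: {gold_ratio:.2f}")
print(f"return to 1:1 parity would imply roughly {implied_upside:.0%} upside")
```

At these assumed prices the ratio is well above 2, so a mere return to the 2016 relationship, never mind the historical premium over gold, would more than double the platinum price.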
From this chart, you can also see how their prices can explode, as palladium did in 2020–2022. The supply is tight because production is so small.
In consequence, I am very much talking my own book, because I am well invested in platinum and palladium. The technical setup looks spectacular, and the fundamentals are there to back it up. When I started writing about platinum and palladium, it was contrarian. Not anymore.