
Crude Reality: Oil Demand Growth Falls After Tariffs
Oil demand growth outlooks have taken a heavy hit in recent weeks. In the wake of the Trump administration's 'Liberation Day' tariff announcements, US business and consumer sentiment tanked and global GDP expectations were trimmed, with knock-on effects for oil demand. On the other side of the globe, China's demand for gasoline is falling fast, with domestic policy stimulating an already-buoyant electric vehicle sector. Yet these blows to demand come at a time when OPEC+ and other major oil-producing nations are looking to raise production levels. So what does this mean for the global oil sector and the price of a barrel of oil? On today's show, Tom Rowlands-Rees is joined by Wayne Tan, BloombergNEF's head of oil markets research, to discuss the recent note 'Oil Markets Monthly: Tariffs, OPEC+ Hike, Structural Shift'.

Related Articles


Axios
Behind the Curtain: The scariest AI reality
The wildest, scariest, indisputable truth about AI's large language models is that the companies building them don't know exactly why or how they work. Sit with that for a moment. The most powerful companies, racing to build the most powerful superhuman intelligence capabilities — ones they readily admit occasionally go rogue to make things up, or even threaten their users — don't know why their machines do what they do.

Why it matters: With the companies pouring hundreds of billions of dollars into willing superhuman intelligence into existence as quickly as possible, and Washington doing nothing to slow or police them, it seems worth dissecting this Great Unknown.

None of the AI companies dispute this. They marvel at the mystery — and muse about it publicly. They're working feverishly to better understand it. They argue you don't need to fully understand a technology to tame or trust it.

Two years ago, Axios managing editor for tech Scott Rosenberg wrote a story, "AI's scariest mystery," saying it's common knowledge among AI developers that they can't always explain or predict their systems' behavior. And that's more true than ever.

Yet there's no sign that the government, the companies or the general public will demand any deeper understanding — or scrutiny — of a technology being built with capabilities beyond human understanding. They're convinced the race to beat China to the most advanced LLMs warrants the risk of the Great Unknown.

The House, despite knowing so little about AI, tucked language into President Trump's "Big, Beautiful Bill" that would prohibit states and localities from enacting any AI regulations for 10 years. The Senate is considering limitations on the provision. Neither the AI companies nor Congress understands what AI will be capable of a year from now, much less a decade from now.

The big picture: Our purpose with this column isn't to be alarmist or "doomers." It's to clinically explain why the inner workings of superhuman intelligence models are a black box, even to the technology's creators. We'll also show, in their own words, how CEOs and founders of the largest AI companies all agree it's a black box.

Let's start with a basic overview of how LLMs work, to better explain the Great Unknown: LLMs — including OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini — aren't traditional software systems following clear, human-written instructions, like Microsoft Word. In the case of Word, it does precisely what it's engineered to do. Instead, LLMs are massive neural networks — like a brain — that ingest massive amounts of information (much of the internet) to learn to generate answers. The engineers know what they're setting in motion and what data sources they draw on. But the LLM's size — the sheer inhuman number of variables in each choice of "best next word" it makes — means even the experts can't explain exactly why it chooses to say anything in particular.

We asked ChatGPT to explain this (and a human at OpenAI confirmed its accuracy): "We can observe what an LLM outputs, but the process by which it decides on a response is largely opaque. As OpenAI's researchers bluntly put it, 'we have not yet developed human-understandable explanations for why the model generates particular outputs.'"

"In fact," ChatGPT continued, "OpenAI admitted that when they tweaked their model architecture in GPT-4, 'more research is needed' to understand why certain versions started hallucinating more than earlier versions — a surprising, unintended behavior even its creators couldn't fully diagnose."
Anthropic — which just released Claude 4, the latest version of its LLM, with great fanfare — admitted it was unsure why Claude, when given access to fictional emails during safety testing, threatened to blackmail an engineer over a supposed extramarital affair. This was part of responsible safety testing — but Anthropic can't fully explain the irresponsible action.

Again, sit with that: The company doesn't know why its machine went rogue and malicious. And, in truth, the creators don't really know how smart or independent the LLMs could grow. Anthropic even said Claude 4 is powerful enough to pose a greater risk of being used to develop nuclear or chemical weapons.

OpenAI's Sam Altman and others toss around the tame word "interpretability" to describe the challenge. "We certainly have not solved interpretability," Altman told a summit in Geneva last year. What Altman and others mean is that they can't interpret the why: Why are LLMs doing what they're doing?

Anthropic CEO Dario Amodei, in an essay in April called "The Urgency of Interpretability," warned: "People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology." Amodei called this a serious risk to humanity — yet his company keeps boasting of more powerful models nearing superhuman capabilities.

Anthropic has been studying the interpretability issue for years, and Amodei has been vocal in warning that it's important to solve. In a statement for this story, Anthropic said: "Understanding how AI works is an urgent issue to solve. It's core to deploying safe AI models and unlocking [AI's] full potential in accelerating scientific discovery and technological development. We have a dedicated research team focused on solving this issue, and they've made significant strides in moving the industry's understanding of the inner workings of AI forward. It's crucial we understand how AI works before it radically transforms our global economy and everyday lives." (Read a paper Anthropic published last year, "Mapping the Mind of a Large Language Model.")

Elon Musk has warned for years that AI presents a civilizational risk. In other words, he literally thinks it could destroy humanity, and has said as much. Yet Musk is pouring billions into his own LLM, called Grok. "I think AI is a significant existential threat," Musk said in Riyadh, Saudi Arabia, last fall, putting the chance "that it goes bad" at 10%-20%.

Reality check: Apple published a paper last week, "The Illusion of Thinking," concluding that even the most advanced AI reasoning models don't really "think," and can fail when stress-tested. The study found that state-of-the-art models (OpenAI's o3-mini, DeepSeek R1 and Anthropic's Claude 3.7 Sonnet) still fail to develop generalizable problem-solving capabilities, with accuracy ultimately collapsing to zero "beyond certain complexities."

But a new report by AI researchers, including former OpenAI employees, called "AI 2027," explains how the Great Unknown could, in theory, turn catastrophic in less than two years. The report is long and often too technical for casual readers to fully grasp. It's wholly speculative, though built on current data about how fast the models are improving. It's being widely read inside the AI companies. It captures the belief — or fear — that LLMs could one day think for themselves and start to act on their own.
Our purpose isn't to alarm or sound doomy. Rather, you should know what the people building these models talk about incessantly. You can dismiss it as hype or hysteria. But researchers at all these companies worry that LLMs, because we don't fully understand them, could outsmart their human creators and go rogue.

In the AI 2027 report, the authors warn that competition with China will push LLMs potentially beyond human control, because no one will want to slow progress even if they see signs of acute danger.

The safe-landing theory: Google's Sundar Pichai — and really all of the big AI company CEOs — argue that humans will learn to better understand how these machines work and find clever, if yet unknown, ways to control them and "improve lives." The companies all have big research and safety teams, and a huge incentive to tame the technologies if they ever want to realize their full value.


Bloomberg
Trump's Tariff Chaos Threatens His Push for Rust Belt Revival
President Donald Trump's signature trade policy is threatening to backfire by upending other top priorities: the revival of US manufacturing and the American Rust Belt. In Illinois, Trump's tariffs prompted a compressor maker to delay a key equipment purchase after an ambitious factory revamp. Rockwell Automation Inc., a Wisconsin-based producer of factory tools, says some manufacturers are putting projects on hold because of uncertainty over costs and future demand. Snap-on Inc. is seeing similar hesitancy among car mechanics.

Associated Press
NATO chief Rutte calls for 400% increase in the alliance's air and missile defense
LONDON (AP) — NATO members need to increase their air and missile defenses by 400% to counter the threat from Russia, the head of the military alliance plans to say on Monday.

Secretary-General Mark Rutte will say during a visit to London that NATO must take a "quantum leap in our collective defense" to face growing instability and threats, according to extracts released by NATO before Rutte's speech. Rutte is due to meet U.K. Prime Minister Keir Starmer at 10 Downing St. ahead of a NATO summit in the Netherlands where the 32-nation alliance is likely to commit to a big hike in military spending.

Like other NATO members, the U.K. has been reassessing its defense spending since Russia's full-scale invasion of Ukraine in February 2022. Starmer has pledged to increase British defense spending to 2.5% of gross domestic product by 2027 and to 3% by 2034.

Rutte has proposed a target of 3.5% of economic output on military spending and another 1.5% on "defense-related expenditure" such as roads, bridges, airfields and sea ports. He said last week he is confident the alliance will agree to the target at its summit in The Hague on June 24-25. At the moment, 22 of the 32 member countries meet or exceed NATO's current 2% target. The new combined 5% target would meet a demand by President Donald Trump that member states spend 5% of gross domestic product on defense. Trump has long questioned the value of NATO and complained that the U.S. provides security to European countries that don't contribute enough.

Rutte plans to say in a speech at the Chatham House think tank in London that NATO needs thousands more armored vehicles and millions more artillery shells, as well as a 400% increase in air and missile defense. "We see in Ukraine how Russia delivers terror from above, so we will strengthen the shield that protects our skies," he plans to say. "Wishful thinking will not keep us safe. We cannot dream away the danger. Hope is not a strategy. So NATO has to become a stronger, fairer and more lethal alliance."

European NATO members, led by the U.K. and France, have scrambled to coordinate their defense posture as Trump transforms American foreign policy, seemingly sidelining Europe as he looks to end the war in Ukraine. Last week the U.K. government said it would build new nuclear-powered attack submarines, prepare its army to fight a war in Europe and become "a battle-ready, armor-clad nation." The plans represent the most sweeping changes to British defenses since the collapse of the Soviet Union more than three decades ago.