
Stocks drift before tech earnings as Fed drama confounds
TSMC (2330.TW), the world's main producer of advanced AI chips, is expected to post a jump in second-quarter profit to record levels, though U.S. tariffs and a strong Taiwan dollar could weigh on its outlook. Results from streaming giant Netflix (NFLX.O), due later on Thursday, are also on investors' radar.
"With Netflix having outperformed the S&P 500 year-to-date by a sizeable 33 percentage points, and the street fully subscribed to the bullish investment case, Netflix will need to blow the lights out with a solid beat and raise," said Chris Weston, head of research at Pepperstone.
MSCI's broadest index of Asia-Pacific shares outside Japan (.MIAPJ0000PUS) was up just 0.06% and the Nikkei (.N225) slipped 0.24%.
Canadian retailer Alimentation Couche-Tard (ATD.TO) said on Wednesday it was withdrawing its $47 billion takeover bid for Seven & i Holdings (3382.T), citing a lack of constructive engagement by the Japanese retailer.
Shares of Seven & i Holdings (3382.T) fell 9%.
European futures rose, with EUROSTOXX 50 futures up 0.56% and FTSE and DAX futures adding about 0.4% each.
Nasdaq futures and S&P 500 futures fell 0.1% each.
Also dominating the market mood was confusion over Fed Chair Powell's future at the central bank, after initial news that U.S. President Donald Trump was likely to fire Powell soon sent stocks and the dollar sliding.
Trump was quick to deny the reports, restoring some calm to volatile markets, but he kept the door open to the possibility and renewed his criticism of the central bank chief for not lowering interest rates.
"I think the most likely outcome is for Powell to stay on until the end of his term next year. Having said that, this is not the first time, so there are going to be episodes of volatility in the dollar as a result of political noise," said Carlos Casanova, UBP's senior economist for Asia.
The dollar was on a fragile footing on Thursday, after having lost ground overnight on worries that the Fed's independence could come under threat.
The euro was last down 0.17% at $1.1620 while sterling eased 0.13% to $1.3400 after both currencies made gains in the previous session.
The dollar was little changed at 98.49 against a basket of currencies, having lost 0.33% overnight.
U.S. Treasury yields also steadied after falling on Wednesday on expectations that Powell's removal could lead to quicker and deeper rate cuts; the two-year yield was last at 3.9022%.
The benchmark 10-year yield was little changed at 4.4673%.
In Japan, yields on government bonds rose on Thursday as investors extended a selloff driven by fiscal concerns ahead of a closely watched upper house election on Sunday. Bond yields move inversely to prices.
"Regardless of the outcome of the election, we are going to see additional fiscal spending coming out of Japan," said UBP's Casanova.
Elsewhere, oil prices rose on Thursday, with Brent crude futures up 0.47% at $68.84 a barrel. U.S. crude futures gained 0.62% to $66.79.
Spot gold dipped 0.15% to $3,341.29 an ounce.

Reuters
Indian farmers accelerate summer crop sowing amid strong monsoon
MUMBAI, July 21 (Reuters) - Indian farmers have accelerated the planting of summer-sown crops such as paddy, soybeans, cotton and corn, following above-average monsoon rainfall in July that increased the soil moisture required for sowing, according to government data.

The monsoon is the lifeblood of India's nearly $4 trillion economy, delivering almost 70% of the rainfall needed to water farms and replenish aquifers and reservoirs. Nearly half of India's farmland is not irrigated and depends on the annual June-September rains for crop growth.

The country has so far received 6% more rainfall than normal since the start of the monsoon season on June 1, which helped farmers plant summer crops on 70.83 million hectares (175 million acres) by July 18, up 4.1% from last year, according to the Ministry of Agriculture and Farmers' Welfare.

Farmers have planted 17.67 million hectares with rice paddy, up 12.4% on the same period last year, as a hike in support prices prompted farmers to expand the area. India is the world's biggest exporter of rice and the top importer of edible oils such as palm oil and soyoil.

Farmers planted soybean on 11.17 million hectares, down from last year's 11.9 million hectares, though planting has accelerated in the last week. Corn was planted on 7.1 million hectares, up from 6.17 million a year earlier. The cotton area was 3.4% lower at 9.86 million hectares, having also seen an increase in the past few days, while pulses planting rose by 2.3% from a year ago to 8.2 million hectares.

The farm ministry updates the provisional sowing figures as it gathers more information from state governments. Farmers are inclined to expand the area under paddy because the government buys large quantities at state-fixed support prices, which is not the case for other crops, said a Mumbai-based dealer with a global trading firm.

"So far, the weather's been pretty good for crops, except in a few parts of north-eastern India. If the monsoon stays strong next month, we could be looking at a bumper harvest across the country," he said.


Gold hits five-week high as dollar, yields ease; spotlight on trade
July 21 (Reuters) - Gold prices gained over 1% to hit a five-week high on Monday as the dollar and U.S. bond yields weakened amid uncertainty ahead of the August 1 U.S. deadline for countries to strike trade deals with Washington or face more tariffs.

Spot gold was up 1.5% at $3,398.23 per ounce at 12:12 p.m. ET (1612 GMT), its highest since June 17. U.S. gold futures were up 1.6% at $3,410.40.

The U.S. dollar index (.DXY) was down 0.7%, making dollar-denominated gold more affordable for buyers using other currencies, while benchmark 10-year U.S. Treasury yields hit a more than one-week low.

"With the August 1st deadline looming, it brings a level of uncertainty to the market and that certainly is supportive," said David Meger, director of metals trading at High Ridge Futures.

The European Union is exploring a broader set of possible counter-measures against the U.S. as prospects for an acceptable trade agreement with Washington fade, according to EU diplomats.

On the interest rate front, traders are pricing in about a 59% chance of a rate cut by the U.S. Federal Reserve in September, according to the CME FedWatch Tool. U.S. Treasury Secretary Scott Bessent said the entire Federal Reserve needed to be examined as an institution. Talk of earlier-than-expected U.S. rate cuts is building, with speculation around a possible replacement of Fed Chair Jerome Powell and a reshaping of the Fed adding to market jitters, Meger said.

Gold is considered a hedge against uncertainty and tends to perform well in a low interest rate environment.

Data showed the world's leading gold consumer, China, brought in 63 metric tons of the precious metal last month, the lowest amount since January. Its imports of platinum in June fell 6.1% from the prior month.

Spot silver gained 2% to $38.94 per ounce, platinum also rose 2% to $1,449.65 and palladium was 2.3% higher at $1,269.64.


The Guardian
Human-level AI is not inevitable. We have the power to change course
'Technology happens because it is possible,' OpenAI CEO Sam Altman told the New York Times in 2019, consciously paraphrasing Robert Oppenheimer, the father of the atomic bomb. Altman captures a Silicon Valley mantra: technology marches forward inexorably.

Another widespread techie conviction is that the first human-level AI – also known as artificial general intelligence (AGI) – will lead to one of two futures: a post-scarcity techno-utopia or the annihilation of humanity.

For countless other species, the arrival of humans spelled doom. We weren't tougher, faster or stronger – just smarter and better coordinated. In many cases, extinction was an accidental byproduct of some other goal we had. A true AGI would amount to creating a new species, which might quickly outsmart or outnumber us. It could see humanity as a minor obstacle, like an anthill in the way of a planned hydroelectric dam, or a resource to exploit, like the billions of animals confined in factory farms.

Altman, along with the heads of the other top AI labs, believes that AI-driven extinction is a real possibility (joining hundreds of leading AI researchers and prominent figures). Given all this, it's natural to ask: should we really try to build a technology that may kill us all if it goes wrong?

Perhaps the most common reply says: AGI is inevitable. It's just too useful not to build. After all, AGI would be the ultimate technology – what a colleague of Alan Turing called 'the last invention that man need ever make'. Besides, the reasoning goes within AI labs, if we don't, someone else will – less responsibly, of course.

A new ideology out of Silicon Valley, effective accelerationism (e/acc), claims that AGI's inevitability is a consequence of the second law of thermodynamics and that its engine is 'technocapital'. The e/acc manifesto asserts: 'This engine cannot be stopped. The ratchet of progress only ever turns in one direction. Going back is not an option.'
For Altman and e/accs, technology takes on a mystical quality – the march of invention is treated as a fact of nature. But it's not. Technology is the product of deliberate human choices, motivated by myriad powerful forces. We have the agency to shape those forces, and history shows that we've done it before. No technology is inevitable, not even something as tempting as AGI.

Some AI worriers like to point out the times humanity resisted and restrained valuable technologies. Fearing novel risks, biologists initially banned and then successfully regulated experiments on recombinant DNA in the 1970s. No human has been reproduced via cloning, even though it's been technically possible for over a decade, and the only scientist to genetically engineer humans was imprisoned for his efforts. Nuclear power can provide consistent, carbon-free energy, but vivid fears of catastrophe have motivated stifling regulations and outright bans.

And if Altman were more familiar with the history of the Manhattan Project, he might realize that the creation of nuclear weapons in 1945 was actually a highly contingent and unlikely outcome, motivated by a mistaken belief that the Germans were ahead in a 'race' for the bomb. Philip Zelikow, the historian who led the 9/11 Commission, said: 'I think had the United States not built an atomic bomb during the Second World War, it's actually not clear to me when or possibly even if an atomic bomb ever is built.'

It's now hard to imagine a world without nuclear weapons. But in a little-known episode, then president Ronald Reagan and Soviet leader Mikhail Gorbachev nearly agreed to ditch all their bombs (a misunderstanding over the 'Star Wars' satellite defense system dashed these hopes). Even though the dream of full disarmament remains just that, nuke counts are less than 20% of their 1986 peak, thanks largely to international agreements. These choices weren't made in a vacuum.
Reagan was a staunch opponent of disarmament before the millions-strong Nuclear Freeze movement got to him. In 1983, he commented to his secretary of state: 'If things get hotter and hotter and arms control remains an issue, maybe I should go see [Soviet leader Yuri] Andropov and propose eliminating all nuclear weapons.'

There are extremely strong economic incentives to keep burning fossil fuels, but climate advocacy has pried open the Overton window and significantly accelerated our decarbonization efforts. In April 2019, the young climate group Extinction Rebellion (XR) brought London to a halt, demanding the UK target net-zero carbon emissions by 2025. Their controversial civil disobedience prompted parliament to declare a climate emergency and the Labour party to adopt a 2030 target to decarbonize the UK's electricity production.

The Sierra Club's Beyond Coal campaign was lesser-known but wildly effective. In just its first five years, the campaign helped shutter more than one-third of US coal plants. Thanks primarily to its move from coal, US per capita carbon emissions are now lower than they were in 1913.

In many ways, the challenge of regulating efforts to build AGI is much smaller than that of decarbonizing. Eighty-two percent of global energy production comes from fossil fuels. Energy is what makes civilization work, but we're not dependent on a hypothetical AGI to make the world go round. Further, slowing and guiding the development of future systems doesn't mean we'd need to stop using existing systems or developing specialist AIs to tackle important problems in medicine, climate and elsewhere.

It's obvious why so many capitalists are AI enthusiasts: they foresee a technology that can achieve their long-time dream of cutting workers out of the loop (and the balance sheet). But governments are not profit maximizers.
Sure, they care about economic growth, but they also care about things like employment, social stability, market concentration, and, occasionally, democracy. It's far less clear how AGI would affect these domains overall. Governments aren't prepared for a world where most people are technologically unemployed.

Capitalists often get what they want, particularly in recent decades, and the boundless pursuit of profit may undermine any regulatory effort to slow the speed of AI development. But capitalists don't always get what they want. At a bar in San Francisco in February, a longtime OpenAI safety researcher pronounced to a group that the e/accs shouldn't be worried about the 'extreme' AI safety people, because they'll never have power. The boosters should actually be afraid of AOC and Senator Josh Hawley because they 'can really fuck things up for you'.

Assuming humans stick around for many millennia, there's no way to know we won't eventually build AGI. But this isn't really what the inevitabilists are saying. Instead, the message tends to be: AGI is imminent. Resistance is futile. But whether we build AGI in five, 20 or 100 years really matters. And the timeline is far more in our control than the boosters will admit. Deep down, I suspect many of them realize this, which is why they spend so much effort trying to convince others that there's no point in trying. Besides, if you think AGI is inevitable, why bother convincing anybody?

We had the computing power required to train GPT-2 more than a decade before OpenAI actually did it, but people didn't know whether it was worth doing. Right now, though, the top AI labs are locked in such a fierce race that they aren't implementing all the precautions that even their own safety teams want. (One OpenAI employee announced recently that he quit 'due to losing confidence that it would behave responsibly around the time of AGI'.)
There's a 'safety tax' that labs can't afford to pay if they hope to stay competitive; testing slows product releases and consumes company resources. Governments, on the other hand, aren't subject to the same financial pressures.

An inevitabilist tech entrepreneur recently said regulating AI development is impossible 'unless you control every line of written code'. That might be true if anyone could spin up an AGI on their laptop. But it turns out that building advanced, general AI models requires enormous arrays of supercomputers, with chips produced by an absurdly monopolistic industry. Because of this, many AI safety advocates see 'compute governance' as a promising approach. Governments could compel cloud computing providers to halt next-generation training runs that don't comply with established guardrails. Far from locking out upstarts or requiring Orwellian levels of surveillance, thresholds could be chosen to affect only players who can afford to spend more than $100m on a single training run.

Governments do have to worry about international competition and the risk of unilateral disarmament, so to speak. But international treaties can be negotiated to widely share the benefits from cutting-edge AI systems while ensuring that labs aren't blindly scaling up systems they don't understand. And while the world may feel fractious, rival nations have cooperated to surprising degrees. The Montreal Protocol fixed the ozone layer by banning chlorofluorocarbons. Most of the world has agreed to ethically motivated bans on militarily useful weapons, such as biological and chemical weapons, blinding laser weapons, and 'weather warfare'.

In the 1960s and 70s, many analysts feared that every country that could build nukes would. But most of the world's roughly three dozen nuclear programs were abandoned. This wasn't the result of happenstance, but rather the creation of a global nonproliferation norm through deliberate statecraft, like the 1968 Non-Proliferation Treaty.
On the few occasions when Americans were asked if they wanted superhuman AI, large majorities said 'no'. Opposition to AI has grown as the technology has become more prevalent. When people argue that AGI is inevitable, what they're really saying is that the popular will shouldn't matter. The boosters see the masses as provincial neo-Luddites who don't know what's good for them. That's why inevitability holds such rhetorical allure for them; it lets them avoid making their real argument, which they know is a loser in the court of public opinion.

The draw of AGI is strong. But the risks involved are potentially civilization-ending. A civilization-scale effort is needed to compel the necessary powers to resist it. Technology happens because people make it happen. We can choose otherwise.

Garrison Lovely is a freelance journalist.