Wall Street drops as tech selloff persists, European shares steady

CNA | 7 hours ago
MUMBAI: Shares on Wall Street dropped for a second successive day on Wednesday as weakness in the tech sector persisted, while a key meeting of central bankers later this week remained in focus for currency and rates traders.
The S&P 500 declined 0.8 per cent and the tech-heavy Nasdaq Composite dropped nearly 1.5 per cent in early trade, as the pressure persisted after a steep fall on Tuesday. The Dow Jones Industrial Average was down 0.2 per cent.
Analysts pointed to a confluence of factors behind weakness in tech stocks, including concerns over steep valuations, investors exiting profitable positions and a general mood of risk aversion.
"I think we were priced for perfection in the U.S. and there was quite a lot of complacency in markets, so some summer volatility should have been expected," said Ben Laidler, head of equity strategy at Bradesco BBI.
Wariness over U.S. President Donald Trump's growing influence over the sector has also been in focus for investors. U.S. Commerce Secretary Howard Lutnick is looking into the government taking equity stakes in Intel as well as other chip companies, two sources told Reuters.
The potential moves follow other unusual revenue-sharing deals Washington has recently struck with U.S. companies, including AI chip giant Nvidia and Advanced Micro Devices.
While the individual developments may be brushed aside by markets, they fall into the broader bucket of concerns over the institutional framework in the United States, Laidler said.
European shares managed to hold on to their gains from earlier in the day, and the pan-European STOXX 600 index was up 0.2 per cent. Britain's FTSE 100 climbed to a record high, boosted by gains in consumer and healthcare companies.
FOCUS ON JACKSON HOLE
The U.S. dollar weakened slightly against a basket of peers after Trump called on Federal Reserve Governor Lisa Cook to resign.
The 10-year U.S. Treasury yield was flat at 4.30 per cent, while the 2-year Treasury yield slipped to 3.74 per cent.
The focus is now on the Kansas City Federal Reserve's August 21-23 Jackson Hole symposium, where Fed Chair Jerome Powell is due to speak on the economic outlook and the central bank's policy framework on Friday.
Powell's remarks on the near-term outlook for rates will be keenly watched as traders are almost fully pricing in a rate cut next month.
"Even if Federal Reserve Chair Jerome Powell emphasises muted unemployment over sharply revised payrolls, that would be a hard sell to both the White House and a market that is pricing in 21bp of rate cuts for September," analysts at ING said in a note.
The minutes of the Fed's July policy meeting were due later on Wednesday, but were unlikely to spur meaningful market reactions as they pre-date the weak U.S. labour market data that prompted a firming of rate cut expectations.
Elsewhere, Sweden's central bank kept its key interest rate on hold as expected, while the Reserve Bank of New Zealand cut policy rates to a three-year low and signalled further easing, sending the kiwi down by more than 1 per cent.
Consumer prices in Britain climbed by 3.8 per cent in July, data showed, the fastest annual rise for a Group of Seven economy.
The data nudged sterling higher but it quickly pared gains, while the fact that the print was not even higher prompted a rally in government bonds. The benchmark 10-year gilt yield was last down 6 basis points at 4.69 per cent.
In commodities, Brent crude futures were last up 1 per cent at $66.5 a barrel as investors awaited the next steps in talks to end Russia's war on Ukraine, with uncertainty over whether oil sanctions might be eased or tightened.
Spot gold rose 0.8 per cent to $3,343.89 an ounce.

Related Articles

Commentary: AI can boost Singapore's productivity, but let's not lose the point of work
CNA | 20 minutes ago

SINGAPORE: If there's one message that keeps coming up in Prime Minister Lawrence Wong's recent speeches, it's that Singapore can't afford to sit on the sidelines as artificial intelligence (AI) reshapes the global economy. Singapore needs to 'think harder' about how it can help every company 'make full use of AI', he said at a conference last month hosted by the Institute of Policy Studies (IPS) and Singapore Business Federation (SBF).

Then on Sunday (Aug 17), during his National Day Rally (NDR) speech, Mr Wong described AI as 'a defining technology of our time', likening its impact to the computer and the internet. 'We will equip and empower every enterprise, especially our SMEs – to harness AI effectively, and sharpen their competitive edge,' he added.

In total, Mr Wong mentioned AI around 40 times in his speech, as he outlined Singapore's vision to boost productivity through widespread adoption of the technology. While the ambition is clear, the challenge lies in how it will be implemented in the workplace.

A recent multi-country study found that only 19 per cent of firms in Singapore have adopted artificial intelligence or machine learning tools. That means there's significant room for future adoption. But if the implementation isn't handled thoughtfully, AI risks stripping work of its meaning and purpose, reducing our sense of mastery and leaving us feeling like cogs in a machine.

The central question for Singapore's AI future, therefore, shouldn't be about using the latest AI tools. What truly matters is how we integrate AI thoughtfully into our personal and professional lives, so that AI enhances – rather than erodes – the meaning and purpose we derive from our work.

A CAMERA DOES NOT MAKE SOMEONE A PHOTOGRAPHER

When we hear 'AI adoption' in the name of productivity, the first thing that comes to mind is automation – technology doing things faster and cheaper. That sounds great (think of the cost savings this would bring!). But is there a risk of automation without reflection? Of mindless automation?

We wouldn't want to create a situation where workers are reduced to passive overseers of machines, there just to monitor and push the stop button if anything goes wrong. Such work can be very disengaging, and we would lose the opportunities to practise and hone our skills.

We are already seeing this in places with poor outsourcing practices, where staff grow over-reliant on external companies to do the work for them, such that it weakens decision-making and detaches staff from the work they are responsible for. Far from empowering, such over-dependency leaves workers less confident, less capable and less in control.

The point is this: What we lose from automation is not a technology problem – it is fundamentally a problem rooted in mindsets. A pencil does not make one a writer or an artist, and a camera does not make one a photographer. What matters is whether we see the tool merely as an instrument of utility or as an extension of our creativity and judgment.

AI is no different. Too often, people want to use it as a crutch to 'do everything for them'. But AI makes mistakes, just like humans. And yet we are prone to defer our own judgments to computer systems – even in trivial matters.

As an example, I once tried to order bubble tea without ice. The cashier refused. Not because it was hard to do (all she had to do was not put ice into the cup), but because 'the system' (i.e. the cash register) would not allow her to key in the order for an iceless drink.

This might seem trivial (and bizarre), but it tells us how easily we humans are willing to surrender our agency to a machine.

WHAT IS AGENCY?

Agency, at its core, is the psychological conviction that we have the freedom and the ability to shape our environment. Those with a weakened sense of agency often feel they are not in a position to change anything, or that nothing can be changed, and thus resign themselves to the status quo.

The philosopher Georg Wilhelm Friedrich Hegel described work as a dialogue between the human mind and the world. A sculptor imprints ideas from her mind onto the clay by working with it, by shaping it with her hands. And by working with the clay – pressing, pulling, reshaping – the clay shapes her mind by revealing more about its properties and what she can and cannot do with it.

It is through work that we gain a clear understanding of what we can and cannot do. It equips us with the practical knowledge to anticipate challenges, solve problems and craft strong strategies. Most importantly, it allows us to recognise and seize opportunities for innovation. In essence, work fosters mastery, builds confidence and ultimately grants us freedom – the very foundation of human agency.

The challenge, then, is not to let AI sever this dialogue. To use AI meaningfully to enhance our productivity, we must continue to stay 'in the loop' – to be engaged, questioning and reflective. This means resisting the temptation to simply accept AI-generated answers uncritically. It requires the curiosity to probe, the responsibility to stay engaged and understand what is really happening on the ground, and the courage to challenge the AI's answers when our instincts say otherwise.

TECHNOLOGY SHOULD MAKE US BETTER, NOT JUST FASTER

At the end of the day, AI is made in our image and likeness – it is fallible, limited and prone to error. It becomes 'superior' only when we surrender our agency to it. Without this fundamental mindset shift, we risk using AI to detach from our work, and to empty out the meaning and purpose in what we do.

As long as we remain engaged – actively steering, questioning and shaping what we do alongside AI – we can be assured that these tools will amplify our human potential and creativity rather than diminish it. This is very much how an artist is able to create beautiful works of art with a mere pencil.

Commentary: Nvidia's icy reception in China is buying time for Huawei
CNA | 20 minutes ago

TOKYO: Don't be fooled by China's icy response to America's policy reversal that will allow a key Nvidia artificial intelligence chip back on the mainland. The country's AI ambitions currently rely on Nvidia's hardware, and authorities know that – even if they won't admit it. But by fanning fears of alleged security or environmental concerns, they're buying time for Huawei to catch up while keeping the pressure on the US in trade talks.

CEO Jensen Huang was greeted with fanfare by industry leaders in Beijing last month after news broke that the Trump administration will allow the sale of H20 chips to resume. It seemed like China got what it wanted: Loosening export controls designed to hold back its AI sector has been a key sticking point during tariff negotiations.

Yet in the weeks since the announcement, cyber authorities have summoned Nvidia to discuss alleged security risks related to the H20s, state media warned of potential backdoors that could cause a 'nightmare', and the government urged local companies to avoid using the much sought-after processors for AI development.

When asked about Beijing's unexpected reaction, US Treasury Secretary Scott Bessent told Bloomberg TV that it 'tells me that they are worried about the Nvidia chips becoming the standard in China'. This is an optimistic and simplistic take. It's too soon for Washington to be celebrating over this feigned angst.

CHINESE COMPANIES UNLIKELY TO STOP BUYING H20S

Nvidia's tech stack is already, overwhelmingly, the standard in the nation's AI sector. There's a reason that giants from ByteDance to Alibaba stockpiled billions of dollars' worth of orders ahead of the now-reversed ban. Similarly, it seems a deliberate move that, despite all the talk of lurking threats, China hasn't issued an outright ban itself. While these warnings have drawn a lot of attention, they likely won't be enough to deter companies eager to power their AI ambitions from buying H20s.

While a Communist Party mouthpiece did appear to blast alleged 'backdoors' in these chips, and many Western news outlets ran with that headline, the reality is more nuanced. The made-to-go-viral editorial in a People's Daily WeChat account was far from an official rebuke, according to an analysis from the China Media Project. Instead, it was meant to make Nvidia 'squirm'.

It worked. The Santa Clara-based chipmaker responded with a public denial of breaches and argued that adding any in the future would be 'an open invitation for disaster'.

It's true, as I've written before, that Beijing would very much prefer its AI industry to use offerings from Huawei instead of Nvidia. But the domestic alternatives aren't ready for primetime – both in terms of performance and the quantity that can be produced.

Domestic AI champion DeepSeek was forced to delay the release of its new model because it was trying to train it on Huawei's hardware instead of Nvidia's, the Financial Times reported last week. But even with a team of Huawei engineers on-site, they couldn't get it to work. In an apparent compromise, DeepSeek is using Nvidia for training the model and Huawei for inference (the phase that involves running and deploying AI). It would be foolish for regulators to arrest DeepSeek's momentum by not allowing it to use any US computing power at all.

TRUMP'S TRANSACTIONAL APPROACH

The most unusual aspect of this is still President Donald Trump's announcement that Nvidia will pay the US 15 per cent of its revenue for AI chip sales on the mainland. It's not hard to imagine the global backlash if such a pay-for-play deal had been set up by the other side. But it also reiterates Trump's transactional approach to these national security concerns. This isn't lost on Beijing, especially at a time when the tariff truce has been further extended.

Beijing may be putting on a show that it doesn't want America's chips, but it's really just building a bridge until the domestic alternatives are ready. There are signs that this moment is approaching: Companies like buzzy startup iFlytek claim to have trained their models entirely with Huawei processors. Still, most Chinese businesses much prefer Nvidia's, in large part because of its supporting software system. Encouraging developers to build on top of Huawei's rival platform over time is what will help improve it enough to eventually force a broader ecosystem shift.

Can people tell a real voice from an AI-generated one? We put it to the test
CNA | 20 minutes ago

SINGAPORE: 'Hi, how have you been? So I heard about this restaurant that just recently opened up. Want to go check it out the next time we meet?'

These were the innocuous sentences used in a simple experiment to find out if people in my social circle could distinguish my real voice from an AI-cloned version. The result was some confusion - but more importantly, the ease with which the imitation was generated suggests that more attention should be paid to the phenomenon of deepfake voice phishing, or vishing.

Millions of dollars have been lost to scammers using cheap yet increasingly sophisticated artificial intelligence tools to impersonate the voices of real people. In Asia-Pacific, the trend of AI and deepfake-enabled fraud is accelerating even faster than in the rest of the world, according to cybersecurity firm Group-IB.

AI-related fraud attempts surged by 194 per cent in 2024 compared to 2023, with deepfake vishing emerging as one of the most commonly used methods, said the company's senior fraud analyst for the region, Yuan Huang.

An increase has also been observed in Singapore. While exact vishing figures are not publicly available, Ms Huang pointed to a study which found that about 56 per cent of businesses here have experienced deepfake audio fraud.

The Cyber Security Agency of Singapore (CSA) told CNA that audio deepfakes are a rising concern, due to how they exploit people's natural inclination to trust familiar voices, such as those seemingly from a family member or a colleague.

COPYING MY VOICE

My experiment, which was facilitated by Ms Huang from Group-IB at the start of August, showed that cloning a voice really did not take much: a publicly available online tool – of which there are several – and samples of my voice as short as 10 seconds. A more convincing replication would require longer samples and more tone variations.

The subscription plan for the platform we chose was priced at less than US$10 for the first month. Elsewhere, fees can start at US$3 a month. Under our plan, we could generate high-definition audio output and cloned voices in other languages.

Seconds after uploading my samples, my deepfaked voice was ready. On the user-friendly interface, cloned voices can be fine-tuned by adjusting speeds or tweaking settings to be more monotonous or more dramatic. None of this required any particular technical skills; practically anyone can generate a cloned voice.

When I finally played back my AI-generated voice in full, I was taken by surprise. I already knew it was supposed to sound like me. But the level of similarity and accuracy – down to the pauses I habitually take in between words – was something else.

The technology was not entirely flawless. The voice clone seemed to have a hint of an American accent, and the more we replayed it, the less it sounded like me. But it would suffice to fool most people, according to studies at least.

A 2024 poll by the University of California in the United States found that participants mistook the identity of an AI voice for its real counterpart 80 per cent of the time. When researchers at University College London played audio clips to 100 people, just 48 per cent were able to tell which was a human voice and which was created using AI.

A recent public awareness survey by CSA found that a majority (78 per cent) had confidence in their ability to identify deepfakes. But only one in four could distinguish legitimate videos from AI-manipulated ones. No data or trends specific to audio content were available.

In an April 2024 reply to a parliamentary question, it was also revealed that the Singapore police have not been tracking the number of deepfake-related scams using video, voice and other media. Home Affairs Minister K Shanmugam said then: "While we have received some reports where the complainants had alleged that deepfake techniques were used by the scammers, the number is not high."

PUT TO THE TEST

After the cloning came the fun part of the experiment. We typed out the statement on checking out a restaurant - something I would say to friends - on the online platform, then separately recorded myself saying the same thing.

Then I called four contacts - two colleagues as well as two friends I've known for almost a decade - and played them the two audio clips.

One of my colleagues initially mistook my real voice for AI. But in the end, all four could tell the difference: According to them, no amount of tweaking of settings could take away the more robotic and monotonous flavour of the cloned voice.

One possible factor could be that the Singaporean accent is not as easy to clone, since most AI models are trained on American or British accents, said Associate Professor Terence Sim from the National University of Singapore's (NUS) school of computing.

My friends were also young adults in their 20s, who may be more aware of the hallmarks of AI use - and on higher alert when it comes to phone calls.

THE VISHING APPEAL

Experts CNA spoke to nonetheless noted the growing accessibility of low-cost and sometimes even free generative AI tools that can convincingly mimic and create human voices.

A CSA spokesperson said that with audio deepfakes, "while there are traditional indicators like unnatural pauses in speech, robotic intonation, or unusual word choices, these tells are becoming harder to spot as AI technology advances".

"What makes audio deepfakes particularly challenging to detect is that humans are naturally attuned to trust voice communications, especially in urgent or emotional situations," he added. "When someone appears to sound exactly like a family member or colleague in distress, the emotional response often overrides our usual caution."

The AI tools also allow scammers to spoof trusted institutions with accuracy, making it easier to manipulate targets and obtain financial and sensitive information, said Assistant Professor Saifuddin Ahmed from the Wee Kim Wee School of Communication and Information at Nanyang Technological University (NTU). "Combined with the availability of stolen personal data from breaches or user mistakes, the technology empowers scammers to craft highly convincing, personalised attacks."

Group-IB's Ms Huang also highlighted that social media is often the primary source for obtaining voice samples. Other avenues include radio or TV interviews, webinars and public recordings available online. "In some cases, scammers record a victim's voice during a phone call, particularly during impersonation or vishing attempts, to later use it for further attacks."

Scammers, being financially motivated, prefer targets who can potentially bring high returns, Ms Huang said. This includes chief executives, chief financial officers and finance personnel. But they can also target the elderly, particularly those unfamiliar with AI-generated voice technology, or those who can be emotionally manipulated.

PROMINENT VISHING CASES

In 2019, the CEO of an energy company based in the United Kingdom was convinced he was on the phone with his boss from their German parent company. The CEO said he recognised a subtle German accent and that it even carried the "melody" of his boss' voice.

He ended up transferring US$243,000 to a fraudster who had used AI to spoof the German boss. The call had been made from an Austrian phone number, and the money was moved from a Hungarian bank account to one in Mexico before being spread out to other locations.

Earlier in 2025, a Hong Kong merchant lost about US$18.5 million in cryptocurrency to scammers impersonating a financial manager of a Chinese company over WhatsApp. The merchant, who wanted to buy cryptocurrency mining equipment, communicated with someone he thought was the financial manager. He even received voice messages from the "manager" during purchase negotiations.

While these cases happened outside of Singapore, similar tactics have been observed in this region, especially targeting finance departments and C-level executives in trusted industries, said Group-IB's Ms Huang.

WHAT TO DO ABOUT IT

Attempting to regulate online voice cloning platforms would be a complex matter, with almost all of them hosted outside of Singapore's jurisdiction, experts told CNA.

Asst Prof Saifuddin from NTU said cooperation between governments and tech companies would be needed to set boundaries to safeguard users. At an industry level, telecom providers can also implement advanced call authentication and improve their spam-blocking technologies, he said.

Asst Prof Saifuddin's advice to individuals was to use a different communication channel where possible, such as text or email, to confirm the request. "Never rely solely on the incoming call for verification," he said.

NUS' Assoc Prof Sim pointed to skills that could be picked up, such as how to listen out for hissing, mismatched background sounds in different segments of the speech, a lack of background sound, or abrupt transitions from one sentence to the next. But he also acknowledged that these were not foolproof, with voice cloning technology constantly improving.

CSA said to be on alert when receiving urgent and unsolicited requests for monetary transfers or sensitive details such as passwords and banking credentials. "For voice calls from a supposed friend or family member, ask questions that only they would know ... If in doubt, do hang up and call the friend or family member directly on their known number," said the agency's spokesperson. In the case of callers claiming to be from institutions, dial their official hotline to verify.

"A healthy dose of scepticism is important in today's digital world," said CSA. "Not everything we see or hear online is what it appears to be, and if it's too good to be true, then it probably is."

That was also my takeaway from the experiment, topped off with a dose of wariness. The next time I get a call from an unknown number, I won't be the first to speak.
