Chinese retail traders embrace DeepSeek in a nod to quants

Reuters | March 11, 2025

Summary
Classes, online courses mushroom to help traders use AI
DeepSeek changes public perception of Chinese quant funds
Brokerages, wealth managers embrace AI to satisfy retail clients
SHANGHAI/SINGAPORE, March 11 (Reuters) - If you cannot fight them, join them, is the mantra among Chinese mom-and-pop investors who are embracing DeepSeek and other artificial intelligence tools, in sharp contrast with last year's government crackdown on computer-driven quantitative traders.
Online crash courses have mushroomed and training rooms are packed with retail traders eager to beat the market using computer models, as the popularity of DeepSeek - itself backed by a quant fund - changed not only the market trajectory, but the perception of China's $700 billion hedge fund industry.
The rapid adoption of DeepSeek in China's retail-dominated stock market is also prompting changes at brokerages and wealth managers, while creating new risks for investors in a market dominated and driven by small-time traders' cash flow.
"The future is the digital age, and AI will be vital," Hong Yangjun told a packed room of individual investors learning to trade with AI on a weekend in February.
Just as future warfare will be fought with drones and robots, the stock market will be a battleground between computers, the lecturers told the class in an office in downtown Shanghai.
Such reverence stands in stark contrast to the public outcry a year ago against computer-driven quant funds, viewed as "bloodsuckers" by retail investors, and blamed by regulators for contributing to market unfairness and volatility.
The industry was also the target of a government crackdown roughly a year ago, when the sector was worth $260 billion by some estimates.
Last month, however, investors handed over 15,800 yuan ($2,179.91) each for a weekend lecture by Mao Yuchun, founder of Alpha Squared Capital, on how to trade stocks with AI, according to the organiser, who promoted the event by drawing attention to Alpha Squared's geographical affinity with High-Flyer.
High-Flyer, based in eastern Hangzhou, is the hedge fund behind DeepSeek - the Chinese AI start-up that stunned Silicon Valley with its cost-efficient large language model and spurred a rally in Chinese stocks.
Meanwhile, Chinese social media is brimming with online courses teaching traders how to use DeepSeek to evaluate companies, pick stocks, and code trading strategies.
"Using quantitative tools to pick stocks saves a lot of time," said Wen Hao, a Hangzhou-based trader.
"You can also use DeepSeek to write code," said Wen, who uses computer programs to determine the timing for buying and selling.
U.S. fund giants including BlackRock, Renaissance Technologies and Two Sigma have already been using AI in investing for some time. Analysts say small asset managers and even retail investors in China stand to benefit from the emergence of DeepSeek's open-sourced model.
ChatGPT is off-limits to Chinese users.
DEEPSEEK ADVICE
The AI-led turnaround in the perception of quant trading has coincided with a sunny start to the year for stocks, after a few years in the doldrums.
Goldman Sachs said the MSCI China index (.dMICN00000PUS) has made its best start to a year on record, and brokers are racing to build AI models into their platforms.
"In the future, Chinese investors will completely change the way they make investment decisions and place orders," said Zhou Lefeng, president of Xiangcai Securities.
"Previously, clients would ask wealth managers for investment advice. Now they ask DeepSeek."
Larry Cao, principal analyst at FinAI Research, said DeepSeek is popular because it is cost-efficient, has strong reasoning ability and, unlike ChatGPT, is readily available and promoted by the Chinese government.
Nevertheless, he is surprised at the level of faith investors put in the model, cautioning that AI has limits.
"People trust AI models more than they trust financial advisers, which is probably misplaced trust at least at this stage," Cao said.
There could also be a herding effect if one school trains many retail investors to trade using the same signal.
"Large language models seem impressive. But at this stage, they are not necessarily smarter than most investors."
What's certain, said Feng Ji, CEO of Baiont Quant, is that DeepSeek has changed retail perception of quant fund managers.
"I can feel strongly that the public are thinking twice about quant fund managers' contributions to society," said Feng, whose company uses machine learning to trade.
"I never think we caused retail investors' losses. We actually provide liquidity and make the market more efficient."
($1 = 7.2480 Chinese yuan renminbi)


Related Articles

Campaigners urge UK watchdog to limit use of AI after report of Meta's plan to automate checks

The Guardian

Internet safety campaigners have urged the UK's communications watchdog to limit the use of artificial intelligence in crucial risk assessments following a report that Mark Zuckerberg's Meta was planning to automate checks. Ofcom said it was 'considering the concerns' raised by the letter following a report last month that up to 90% of all risk assessments at the owner of Facebook, Instagram and WhatsApp would soon be carried out by AI.

Social media platforms are required under the UK's Online Safety Act to gauge how harm could take place on their services and how they plan to mitigate those potential harms – with a particular focus on protecting child users and preventing illegal content from appearing. The risk assessment process is viewed as a key aspect of the act.

In a letter to Ofcom's chief executive, Dame Melanie Dawes, organisations including the Molly Rose Foundation, the NSPCC and the Internet Watch Foundation described the prospect of AI-driven risk assessments as a 'retrograde and highly alarming step'. 'We urge you to publicly assert that risk assessments will not normally be considered as 'suitable and sufficient', the standard required by … the Act, where these have been wholly or predominantly produced through automation.' The letter also urged the watchdog to 'challenge any assumption that platforms can choose to water down their risk assessment processes'.

A spokesperson for Ofcom said: 'We've been clear that services should tell us who completed, reviewed and approved their risk assessment. We are considering the concerns raised in this letter and will respond in due course.'

Meta said the letter deliberately misstated the company's approach to safety and that it was committed to high standards and complying with regulations. 'We are not using AI to make decisions about risk,' said a Meta spokesperson.
'Rather, our experts built a tool that helps teams identify when legal and policy requirements apply to specific products. We use technology, overseen by humans, to improve our ability to manage harmful content and our technological advancements have significantly improved safety outcomes.' The Molly Rose Foundation organised the letter after NPR, a US broadcaster, reported last month that updates to Meta's algorithms and new safety features will mostly be approved by an AI system and no longer scrutinised by staffers. According to one former Meta executive, who spoke to NPR anonymously, the change will allow the company to launch app updates and features on Facebook, Instagram and WhatsApp more quickly but would create 'higher risks' for users, because potential problems are less likely to be prevented before a new product is released to the public. NPR also reported that Meta was considering automating reviews for sensitive areas including youth risk and monitoring the spread of falsehoods.

Minister says AI 'does lie' but defends Government amid copyright row

South Wales Guardian

Peter Kyle acknowledged the technology was 'not flawless' as he insisted the Government would 'never sell downstream' the rights of artists in the UK. He also said he had 'mistakenly' said his preferred option on AI and copyright was requiring rights-holders to 'opt out' of their material being used by tech companies, and had since 'gone back to the drawing board'.

Ministers have faced a backlash from major figures in the creative industries over their approach to copyright, with Sir Elton John this week describing the situation as an 'existential issue'. The Government is locked in a standoff with the House of Lords, which has demanded that artists be offered immediate copyright protection as an amendment to the Data (Use and Access) Bill. Peers have attempted to change the legislation by adding a commitment to introduce transparency requirements aimed at ensuring rights-holders are able to see when their work has been used and by whom.

Asked about the risk of AI producing unreliable information, Mr Kyle said 'people need to understand that AI is not flawless, and that AI does lie because it's based on human characteristics'. 'Now it is getting more precise as we move forward. It's getting more powerful as we move forward,' he told Sky News's Sunday Morning With Trevor Phillips. 'But as with every single technology that comes into society, you can only safely use it and wisely use it by understanding how it works.' He added: 'We are going to legislate for AI going forward and we're going to balance it with the same legislation that we'll bring in to modernise the copyright legislation as well.'

The Government has said it will address copyright issues as a whole after the more than 11,500 responses to its consultation on the impact of AI have been reviewed, rather than in what it has branded 'piecemeal' legislation.
Among the proposals had been a suggestion that tech companies could be given free access to British music, films and books in order to train AI models without permission or payment, with artists required to 'opt out' if they do not want their work to be used.

Asked about the prospect of an opt-out clause, Mr Kyle told the BBC's Sunday With Laura Kuenssberg programme: 'I always had on the table from the outset an opt-out clause. 'But I mistakenly said this was my preferred option that had more prominence than perhaps some of the creatives wanted it to have, and I've now sort of gone back to the drawing board on that, because I am listening to what people want.'

Last month hundreds of stars including Sir Elton, Sir Paul McCartney and Kate Bush signed a joint letter to Sir Keir Starmer urging the Prime Minister to introduce safeguards against work being plundered for free.

US attacks on science and research a 'great gift' to China on artificial intelligence, former OpenAI board member says

The Guardian

The US administration's targeting of academic research and international students is a 'great gift' to China in the race to compete on artificial intelligence, former OpenAI board member Helen Toner has said.

The director of strategy at Georgetown's Center for Security and Emerging Technology (CSET) joined the board of OpenAI in 2021 after a career studying AI and the relationship between the United States and China. Toner, a 33-year-old University of Melbourne graduate, was on the board for two years until a falling out with founder Sam Altman in 2023. Altman was fired by the board over claims that he was not 'consistently candid' in his communications and that the board did not have confidence in his ability to lead. In the chaotic months that followed, Altman was re-hired, and three members of the board, including Toner, were ousted instead. The events will soon be the subject of a planned film, with the director of Challengers and Call Me By Your Name, Luca Guadagnino, reportedly in talks to direct.

The saga, according to Time magazine – which named her one of the Top 100 most influential people on AI in 2024 – resulted in the Australian having 'the ear of policymakers around the world trying to regulate AI'.

At CSET, Toner has a team of 60 people working on AI research for white papers or briefing policymakers focused on the use of AI in the military, workforce, biosecurity and cybersecurity sectors. 'A lot of my work focuses on some combination of AI, safety and security issues, the Chinese AI ecosystem and also what gets called frontier AI,' Toner said.

Toner said the United States is concerned about losing the AI race to China, and while US chip export controls make it harder for China to get the compute power to compete, the country was still making a 'serious push' on AI, as highlighted by the surprise success of Chinese generative AI model DeepSeek earlier this year.
The Trump administration's attacks on research and bans on international students are a 'gift' to China in the AI race with the US, Toner said. 'Certainly it's a great gift to [China] the way that the US is currently attacking scientific research, and foreign talent – which is a huge proportion of the US AI workforce is immigrants, many of them coming from China,' she said. 'That is a big … boon to China in terms of competing with the US.'

The AI boom has led to claims and concerns about a job wipeout caused by companies using AI to replace work that had otherwise been done by humans. Dario Amodei, the CEO of Anthropic, the company behind the generative AI model Claude, told Axios last week that AI could reduce entry-level white-collar jobs by 50% and result in 20% unemployment in the next five years. Toner said Amodei 'often says things that seem directionally right to me, but in terms of … timeline and numbers often seem quite aggressive', but added that disruption in the jobs market had already started to show.

'The kind of things that [language model-based AI] can do best at the moment … if you can give them a bite-size task – not a really long term project, but something that you might not need ages and ages to do and something where you still need human review,' she said. 'That's a lot of the sort of work that you give to interns or new grads in white-collar industries.'

Experts have suggested companies that invested heavily in AI are now being pressed to show the results of that investment. Toner said while the real-world use of AI can generate a lot of value, it is less clear what business models and which players will benefit from that value. Dominant uses might be a mix of different AI services plugged into existing applications – like phone keyboards that can now transcribe voices – as well as stand-alone chatbots, but it's 'up in the air' which type of AI would actually dominate, she said.
Toner said the push for profitability was less risky than the overall race to be first in AI advancements. 'It means that these companies are all making it up as they go along and figuring out as they go how to make trade-offs between getting products out the door, doing extra testing, putting in extra guardrails, putting in measures that are supposed to make the model more safe but also make it more annoying to use,' she said. 'They're figuring that all out on the fly, and … they're making those decisions while under pressure to go as fast as they can.'

Toner said she was worried about the idea of 'gradual disempowerment to AI' – 'meaning a world where we just gradually hand over more control over different parts of society and the economy and government to AI systems, and then realise a bit too late that it's not going the way that we wanted, but we can't really turn back'.

She is most optimistic about AI's use in improving science and drug discovery, and for self-driving services like Waymo in reducing fatalities on the roads. 'With AI, you never want to be looking for making the AI perfect, you want it to be better than the alternative. And when it comes to cars, the alternative is thousands of people dying per year. If you can improve on that, that's amazing. You're saving many, many people.'

Toner joked that her friends had been sending her options on who might play her in the film. 'Any of the names that friends of mine have thrown my way are all these incredibly beautiful actresses,' she said. 'So I'll take any of those, whoever they choose.'
