Alibaba, Tencent freeze AI tools during high-stakes China exam


Business Times · 6 hours ago

[BEIJING] China's most popular artificial intelligence (AI) chatbots, such as Alibaba's Qwen, have temporarily disabled functions including picture recognition to prevent students from cheating during the country's annual 'gaokao' college entrance examinations.
Apps including Tencent Holdings' Yuanbao and Moonshot's Kimi suspended photo-recognition services during the hours when the multi-day exams take place across the country. Asked to explain, the chatbots responded: 'To ensure the fairness of the college entrance examinations, this function cannot be used during the test period.'
China's infamously rigorous 'gaokao' is a rite of passage for teenagers across the nation, thought to shape the futures of millions of aspiring graduates. Students – and their parents – pull out all the stops for any edge they can get, from extensive private tuition to, on occasion, attempts to cheat. To minimise disruption, examiners outlaw the use of devices during the hours-long tests.
Alibaba Group Holding's Qwen and ByteDance's Doubao still offered photo recognition as at Monday (Jun 9). But when asked to answer questions about a photo of a test paper, Qwen responded that the service was temporarily frozen during exam hours from Jun 7 to 10. Doubao said the picture uploaded was 'not in compliance with rules'.
China lacks a widely adopted university application process such as in the US, where students prove their qualifications through years of academic records, along with standardised tests and personal essays. For Chinese high-school seniors, the gaokao, held in June each year, is often the only way they can impress admissions officials. About 13.4 million students are taking part in this year's exams.
The test is considered the most significant in the nation, especially for those from smaller cities and lower-income families that lack resources. A misstep may require another year in high school, or completely alter a teenager's future.
The exam is also one of the most strictly controlled in China, to prevent cheating and ensure fairness. But fast-developing AI has posed new challenges for schools and regulators. The education ministry last month released a set of regulations stating that, while schools should start cultivating AI talent at a young age, students should not use AI-generated content as answers in homework and tests. BLOOMBERG


Related Articles

College grads are lab rats in the Great AI Experiment

Business Times · 2 hours ago

COMPANIES are eliminating the grunt work that used to train young professionals – and they don't seem to have a clear plan for what comes next. Artificial intelligence (AI) is analysing documents, writing briefing notes, creating PowerPoint presentations and handling customer service queries, and – surprise! – the younger humans who normally do that work are struggling to find jobs. Recently, the chief executive officer of AI firm Anthropic predicted that AI would wipe out half of all entry-level white-collar jobs.

The reason is simple. Companies are often advised to treat ChatGPT 'like an intern', and some are doing so at the expense of human interns. This has thrust college graduates into a painful experiment across multiple industries, but it doesn't have to be all bad. Employers must take the role of scientists, observing how AI helps and hinders their new recruits, while figuring out new ways to train them. And the young lab rats in this trial must adapt faster than the technology trying to displace them, while jumping into more advanced work.

Consulting giant KPMG, for instance, is giving graduates tax work that would previously have gone to staff with three years of experience. Junior staff at PwC have started pitching to clients. Hedge fund Man Group tells me its junior analysts who use AI to scour research papers now have more time to formulate and test trading ideas – what the firm calls 'higher-level work'.

I recently interviewed two young professionals about using AI in this way, and, perhaps not surprisingly, neither of them complained about it. One accountant who had just left university said he was using ChatGPT to pore over filings and Moody's Ratings reports, saving him hours on due diligence.
Another young executive at a public relations (PR) firm, who'd graduated last year from the London School of Economics, said tools such as ChatGPT had cut her time spent tracking press coverage from two-and-a-half hours to 15 minutes; and while her predecessors would have spent four or five hours reading forums on Reddit, that now takes her only 45 minutes.

I'm not convinced, however, that either of these approaches is actually helping recruits learn what they need to know. The young accountant, for instance, might be saving time, but he's also missing out on the practice of spotting something fishy in raw data. How do you learn to notice red flags if you don't dig through the numbers yourself? A clean summary from AI doesn't build that neural pathway in your brain. The PR worker also didn't seem to be doing 'higher-level work', but simply doing analysis more quickly. The output provided by AI is clearly useful to a junior worker's bosses, but I'm sceptical that it's giving them a deeper understanding of how a business or industry works.

What's worse is that their opportunities for work are declining overall. 'We've seen a huge drop in the demand for 'entry-level' talent across a number of our client sets,' says James Callander, CEO of Freshminds, a London recruitment firm that specialises in finding staff for consultancies. An increasing number of clients want more 'work-ready' professionals who already have a first job under their belt, he adds. That corroborates a trend flagged by venture capital firm SignalFire, whose State of Talent 2025 report pointed to what it called an 'experience paradox', where more companies post junior roles but fill them with senior workers. The data crunchers at LinkedIn have noticed a similar trend, prompting one of its executives to claim that the bottom rung of the career ladder is breaking.
Yet some young professionals seem unfazed. Last week, a University of Oxford professor asked a group of 70 executive Master of Business Administration students from the National University of Singapore whether Gen Z jobs were being disproportionately eroded by AI. Some said 'no', adding that they, as younger workers, were best placed to become the most valuable people in a workplace because of their strength in manipulating AI tools, recounts Dr Alex Connock, a senior fellow at Oxford's Said Business School who specialises in the media industry and AI. The students weren't just using ChatGPT, but a range of tools such as Gemini, Claude, Firefly, HeyGen, Gamma, Higgsfield, Suno, Udio, NotebookLM and Midjourney, says Dr Connock.

The lesson here for businesses is that, sure, in the short term you can outsource entry-level work to AI and cut costs; but that means missing out on capturing AI-native talent. It's also dangerous to assume that giving junior staff AI tools will automatically make them more strategic. They could instead become dependent on, even addicted to, AI tools, and not learn business fundamentals.

There are lessons here from social media. Studies show that young people who use it actively tend not to suffer the mental-health harms of those who use it passively. Posting and chatting on Instagram, for instance, is better than curling up on the couch and doom-scrolling for an hour. Perhaps businesses should similarly look for healthy engagement by their newer staff with AI, checking that they're using it to sense-check their own ideas and interrogate a chatbot's answers, rather than going to it for all analysis and accepting whatever the tools spit out. That could spell the difference between raising a workforce that can think strategically, and one that can't think beyond the output of an AI tool. BLOOMBERG

Xiaohongshu joins wave of Chinese firms releasing open-source AI models

Business Times · 2 hours ago

[BEIJING] Xiaohongshu, also known as Rednote, one of the country's most popular social media platforms, has released an open-source large language model, joining a wave of Chinese tech firms making their artificial intelligence models freely available.

The approach contrasts with that of many US tech giants such as OpenAI and Google, which have kept their most advanced models proprietary, though some American firms, including Meta, have also released open-source models. Open sourcing allows Chinese companies to demonstrate their technological capabilities, build developer communities and spread their influence globally at a time when the US has sought to stymie China's tech progress with export restrictions on advanced semiconductors.

Xiaohongshu's model is available for download on developer platform Hugging Face. A company technical paper describing it was uploaded on Friday (Jun 6). In coding tasks, the model performs comparably to Alibaba's Qwen 2.5 series, though it trails more advanced models such as DeepSeek-V3, the technical paper said.

Xiaohongshu is an Instagram-like platform where users share photos, videos, text posts and live streams. It gained international attention earlier this year when some US users flocked to the app amid concerns over a potential TikTok ban.

The company has invested in large language model development since 2023, not long after OpenAI's release of ChatGPT in late 2022. It has accelerated its AI efforts in recent months, launching Diandian, an AI-powered search application that helps users find content on Xiaohongshu's main platform. Other companies pursuing an open-source approach include Alibaba, which launched Qwen 3, an upgraded version of its model, in April.
Earlier this year, startup DeepSeek released its low-cost R1 model as open-source software, shaking up the global AI industry due to its competitive performance despite being developed at a fraction of the cost of Western rivals. REUTERS

UK financial regulator partners with Nvidia in AI 'sandbox'

CNA · 3 hours ago

LONDON - Financial firms in Britain will be able to test artificial intelligence tools later this year in a regulatory "sandbox" launched on Monday by the country's financial watchdog, part of a broader government strategy to support innovation and economic growth.

The Financial Conduct Authority (FCA) has partnered with U.S. chipmaker Nvidia to provide access to advanced computing power and bespoke AI software through what it calls a "Supercharged Sandbox". A sandbox is a controlled environment where companies can test new ideas such as products, services or technologies. The programme is intended to help firms in the early stages of exploring AI, offering access to technical expertise, better datasets and regulatory support, the FCA said. It is open to all financial services companies experimenting with AI.

"This collaboration will help those that want to test AI ideas but who lack the capabilities to do so," said Jessica Rusu, the FCA's chief data, information and intelligence officer. "We'll help firms harness AI to benefit our markets and consumers, while supporting economic growth."

Finance minister Rachel Reeves has urged Britain's regulators to remove barriers to economic growth, describing it as an "absolute top priority" for the government. In April, she said she was pleased with how the FCA and the Prudential Regulation Authority, part of the Bank of England, were responding to her call to cut red tape.

Nvidia said the initiative would allow firms to explore AI-powered innovations in a secure environment, using its accelerated computing platform. "AI is fundamentally reshaping the financial sector," said Jochen Papenbrock, EMEA head of financial technology at Nvidia, citing improvements in data analysis, automation and risk management. He added that the sandbox would give firms a "secure environment to explore AI innovations using Nvidia's full-stack accelerated computing platform, supporting industry-wide growth and efficiency".
The testing is set to begin in October.
