DeepSeek's success lies in start-up's singular focus on innovation, AI scientist says

Yang, who serves as associate dean for global engagement at Hong Kong Polytechnic University, said at a forum last Tuesday that the Hangzhou-based company was relatively 'free from [commercial] product and business pressures, which are a constant concern' for Big Tech firms. DeepSeek was purely engaged 'in developing AI large models', she said at the event 'DeepSeek and Beyond', held at PolyU.

Her assessment partly echoes the view of Alibaba Group Holding chairman Joe Tsai, who said in Dubai last month that DeepSeek's breakthrough in developing cheap but high-performing large language models (LLMs) was 'quite significant' and could inspire more AI developers to focus on open-source solutions. Alibaba owns the South China Morning Post.

LLMs are the technology underpinning generative AI assistants such as OpenAI's ChatGPT and DeepSeek's namesake chatbot.
Open source gives the public access to a program's source code, allowing third-party software developers to modify or share its design, fix broken links or scale up its capabilities.


Related Articles

AI content detector: why does China dismiss it as ‘superstition tech'?
South China Morning Post · a day ago

With the graduation season approaching, many Chinese universities have introduced regulations setting clear requirements for the proportion of artificial intelligence-generated content – or the 'AI rate', as it is called – in theses. Some universities have used the AI rate as a deciding factor in whether a thesis is approved.

The rule is intended to prevent academic misconduct, as educators have become increasingly concerned about the unregulated use of AI in producing scholarly literature, including data falsification and content fabrication, since the public debut of generative AI models such as ChatGPT.

However, an official publication of the Ministry of Science and Technology has warned that using AI content detectors to identify AI writing is essentially a form of 'technological superstition' that could cause many unintended side effects. AI detection tools could produce false results, the Science and Technology Daily said in an editorial last Tuesday, adding that some graduates had complained that content clearly written by them was labelled as AI-generated. Even a very famous Chinese essay written 100 years ago was evaluated as more than 60 per cent AI-generated when analysed by these tools, the article said.

DeepSeek job ads call for interns to label medical data to improve AI use in hospitals
South China Morning Post · 2 days ago

Chinese artificial intelligence (AI) start-up DeepSeek, which has remained mute over a release date for its R2 reasoning model, has begun recruiting interns to label medical data to improve the use of AI in hospitals.

According to recruitment ads posted on Boss Zhipin, one of China's largest hiring websites, DeepSeek is offering 500 yuan (US$70) per day for interns who can work four days a week to label medical data for applications involving 'advanced auxiliary diagnosis' tools. The jobs are based in Beijing. The intern roles are not listed on DeepSeek's official WeChat hiring channel. It is the first time that DeepSeek has publicly mentioned the need for 'medical data' in data labelling.

The hiring notice on Boss said applicants should have medical backgrounds and either be undergraduates in their fourth year or have a master's degree. They also need experience in using large language models (LLMs) and should be able to write Python code and prompts for large AI models. DeepSeek did not immediately reply to a request for comment.

A nurse moves a bed through a corridor at a hospital in Duan Yao autonomous county in Guangxi region, China, January 9, 2025. Photo: Reuters

The move comes as Chinese hospitals embrace open-source AI models from DeepSeek to generate diagnoses and prescriptions. As of March, at least 300 hospitals in China had started using DeepSeek's LLMs in clinical diagnostics and medical decision support.

‘Godfather of AI' now fears it's unsafe and has a plan to fix it
Asia Times · 2 days ago

This week, the US Federal Bureau of Investigation revealed that two men suspected of bombing a fertility clinic in California last month allegedly used artificial intelligence (AI) to obtain bomb-making instructions. The FBI did not disclose the name of the AI program in question.

This brings into sharp focus the urgent need to make AI safer. Currently we are living in the 'wild west' era of AI, where companies are fiercely competing to develop the fastest and most entertaining AI systems. Each company wants to outdo competitors and claim the top spot. This intense competition often leads to intentional or unintentional shortcuts – especially when it comes to safety.

Coincidentally, at around the same time as the FBI's revelation, one of the godfathers of modern AI, Canadian computer science professor Yoshua Bengio, launched a new nonprofit organisation dedicated to developing a new AI model specifically designed to be safer than other AI models – and to target those that cause social harm.

So what is Bengio's new AI model? And will it actually protect the world from AI-facilitated harm?

In 2018, Bengio, alongside his colleagues Yann LeCun and Geoffrey Hinton, won the Turing Award for the groundbreaking research they had published three years earlier on deep learning. A branch of machine learning, deep learning attempts to mimic the processes of the human brain by using artificial neural networks to learn from computational data and make predictions.

Bengio's new nonprofit organisation, LawZero, is developing 'Scientist AI'. Bengio has said this model will be 'honest and not deceptive', and incorporate safety-by-design principles.

According to a preprint paper released online earlier this year, Scientist AI will differ from current AI systems in two key ways. First, it can assess and communicate its confidence level in its answers, helping to reduce the problem of AI giving overly confident and incorrect responses. Second, it can explain its reasoning to humans, allowing its conclusions to be evaluated and tested for accuracy.

Interestingly, older AI systems had this feature. But in the rush for speed and new approaches, many modern AI models can't explain their decisions. Their developers have sacrificed explainability for speed.

Bengio also intends 'Scientist AI' to act as a guardrail against unsafe AI. It could monitor other, less reliable and harmful AI systems – essentially fighting fire with fire. This may be the only viable solution to improve AI safety. Humans cannot properly monitor systems such as ChatGPT, which handle over a billion queries daily. Only another AI can manage this scale. Using an AI system against other AI systems is not just a sci-fi concept – it is a common practice in research to compare and test different levels of intelligence in AI systems.

Large language models and machine learning are just small parts of today's AI landscape. Another key element Bengio's team is adding to Scientist AI is a 'world model', which brings certainty and explainability. Just as humans make decisions based on their understanding of the world, AI needs a similar model to function effectively.

The absence of a world model in current AI models is clear. One well-known example is the 'hand problem': most of today's AI models can imitate the appearance of hands but cannot replicate natural hand movements, because they lack an understanding of the physics – a world model – behind them.

Another example is how models such as ChatGPT struggle with chess, failing to win and even making illegal moves. This is despite simpler AI systems, which do contain a model of the 'world' of chess, beating even the best human players. These issues stem from the lack of a foundational world model in these systems, which are not inherently designed to model the dynamics of the real world.

Yoshua Bengio is recognized as one of the godfathers of AI. Photo: Alex Wong / Getty Images via The Conversation

Bengio is on the right track, aiming to build safer, more trustworthy AI by combining large language models with other AI technologies. However, his journey isn't going to be easy. LawZero's US$30 million in funding is small compared to efforts such as the US$500 billion project announced by US President Donald Trump earlier this year to accelerate the development of AI.

Making LawZero's task harder is the fact that Scientist AI – like any other AI project – needs huge amounts of data to be powerful, and most data are controlled by major tech companies.

There is also an outstanding question: even if Bengio can build an AI system that does everything he says it can, how is it going to be able to control other systems that might be causing harm?

Still, this project, with talented researchers behind it, could spark a movement toward a future where AI truly helps humans thrive. If successful, it could set new expectations for safe AI, motivating researchers, developers and policymakers to prioritize safety. Perhaps if we had taken similar action when social media first emerged, we would have a safer online environment for young people's mental health. And maybe, if Scientist AI had already been in place, it could have prevented people with harmful intentions from accessing dangerous information with the help of AI systems.

Armin Chitizadeh is lecturer, School of Computer Science, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.
