Chinese sex doll maker sees jump in 2025 sales as AI improves user experience

Published: 7:00am, 16 Feb 2025

WMDoll, one of China's biggest sex doll makers, expects to record a 30 per cent jump in sales this year, as the company's adoption of open-source generative artificial intelligence (AI) models has helped improve the user experience of its products. Feedback has been generally positive since WMDoll integrated large language models (LLMs) – the technology underpinning generative AI services such as ChatGPT – into its new anthropomorphic sex toys, according to founder and chief executive Liu Jiangxia.
'It makes the dolls more responsive and interactive, which offers users a better experience,' Liu said in a recent interview with the South China Morning Post.

WMDoll – based in Zhongshan, a city in southern Guangdong province – embeds its latest MetaBox series with an AI module connected to cloud computing services hosted in data centres across various markets, where the LLMs process the information from each toy. The company said it has adopted several open-source LLMs, including Meta Platforms' Llama models, which can be fine-tuned and deployed anywhere.

Sex dolls' heads are sorted on an assembly line inside WMDoll's factory in Zhongshan. Photo: Thomas Yau
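The report does not describe WMDoll's software in detail, but the architecture it outlines – an on-device module relaying each conversation to a cloud-hosted, fine-tuned open-source LLM – follows a common pattern. Below is a minimal sketch of that loop, assuming a Llama-family chat model served through Hugging Face's transformers library; the model ID, persona prompt and sampling settings are illustrative assumptions, not WMDoll's actual stack.

```python
# Minimal sketch of a device-to-cloud chat loop: the device sends the
# user's (already transcribed) speech to a cloud-hosted open-source LLM
# and receives a generated reply. All names and settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"  # assumed; any chat-tuned open model works

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# The persona prompt is hypothetical, standing in for whatever
# fine-tuning or system prompt a vendor might actually use.
history = [{"role": "system",
            "content": "You are a warm, attentive companion. Keep replies short."}]

def reply(user_text: str) -> str:
    """Append the user's utterance, query the LLM, and return its reply."""
    history.append({"role": "user", "content": user_text})
    inputs = tokenizer.apply_chat_template(
        history, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=80, do_sample=True, temperature=0.8)
    text = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
    history.append({"role": "assistant", "content": text})
    return text

print(reply("I had a long day at work."))
```

Keeping the conversation history in a list and replaying it through the chat template on every turn is what makes the exchange feel responsive and context-aware, which is the improvement Liu describes.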
The new generation of sex dolls – which are still supported by a metal skeleton and have either a silicone or thermoplastic elastomer exterior – reflects the sector's willingness to adopt open-source technology to improve customer satisfaction. By comparison, traditional sex dolls are limited to simple pre-programmed responses and lack the expressive capability needed to engage a human.


Related Articles

Hong Kong police to take down fake news report about Andy Lau resembling SCMP

South China Morning Post

8 hours ago

Police will take down a fake news report purporting to be from the South China Morning Post about superstar actor Andy Lau Tak-wah being sued by the Hong Kong Monetary Authority over his investment advice, the Post has learned.

A police source said on Friday that the force's cyber security and technology crime bureau was handling the case and would take down the fabricated report as soon as possible. The website was designed to resemble the Post's.

The fake article claimed the Monetary Authority had sued Lau over statements he made during a live broadcast, in which he shared his tips on becoming rich through a cryptocurrency trading platform, with a deposit of HK$2,000 (US$254) generating a million dollars in months.

'Give me 2,000 HKD, and with the Immediate FastX platform I'll make a million in 12 to 15 weeks!' Lau was quoted as saying in the article. 'This platform is the perfect solution for those who want to get rich quick. It's built on self-learning artificial intelligence, which exchanges cryptocurrencies for you.'

Alibaba unveils new open-source AI embedding models, a field it leads globally

South China Morning Post

13 hours ago

Alibaba Group Holding has made its Qwen3 Embedding series available for developers, in the Chinese tech giant's latest bid to solidify its global leadership in open-source artificial intelligence (AI) models.

Released late on Thursday, the series marks another addition to the company's line-up of large language models (LLMs), which are among the world's most popular open-source AI systems, according to New York-based computer app company Hugging Face. Alibaba, owner of the South China Morning Post, ranks third globally in the field of LLMs, according to the 2025 AI Index Report from Stanford University.

The new models, which come in a range of parameter sizes, 'support over 100 languages, including multiple programming languages, and provide robust multilingual, cross-lingual and code retrieval capabilities', according to Alibaba.

In AI, an embedding model helps computers understand and process text by turning it into numerical representations. Since computers process data solely in numerical form, the embedding process enables them to grasp the semantics of documents and questions more effectively, delivering more tailored results that do not rely solely on keywords.
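As a concrete illustration of that last paragraph, here is a minimal sketch of semantic retrieval with an embedding model, using the sentence-transformers library and one of the released Qwen3 embedding checkpoints; the example texts are invented for illustration, and any embedding model could stand in.

```python
# Sketch: an embedding model turns text into vectors whose geometric
# closeness tracks meaning, so retrieval can match on semantics rather
# than shared keywords. Example texts are invented for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

docs = [
    "How do I reset my router?",
    "Best noodle restaurants in Zhongshan",
    "Steps to restart a home network device",
]
query = "my wifi box stopped working, how do I fix it"

# Each text becomes a fixed-length numerical vector.
doc_vecs = model.encode(docs)
query_vec = model.encode(query)

# Cosine similarity ranks documents by meaning; note that the best match
# shares almost no keywords with the query.
scores = util.cos_sim(query_vec, doc_vecs)[0]
for doc, score in sorted(zip(docs, scores), key=lambda p: -p[1]):
    print(f"{float(score):.3f}  {doc}")
```

The ranking step is the 'cross-lingual and code retrieval' capability Alibaba describes, applied here to plain English: the query and documents meet in the same vector space regardless of surface wording.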

'Godfather of AI' now fears it's unsafe and has a plan to fix it

Asia Times

15 hours ago

This week, the US Federal Bureau of Investigation revealed that two men suspected of bombing a fertility clinic in California last month allegedly used artificial intelligence (AI) to obtain bomb-making instructions. The FBI did not disclose the name of the AI program in question.

This brings into sharp focus the urgent need to make AI safer. We are currently living in the 'wild west' era of AI, in which companies are fiercely competing to develop the fastest and most entertaining AI systems. Each company wants to outdo competitors and claim the top spot. This intense competition often leads to intentional or unintentional shortcuts, especially when it comes to safety.

Coincidentally, at around the same time as the FBI's revelation, one of the godfathers of modern AI, Canadian computer science professor Yoshua Bengio, launched a new nonprofit organization dedicated to developing an AI model specifically designed to be safer than other AI models, and to target those that cause social harm. So what is Bengio's new AI model? And will it actually protect the world from AI-facilitated harm?

In 2018, Bengio, alongside his colleagues Yann LeCun and Geoffrey Hinton, won the Turing Award for the groundbreaking research they had published three years earlier on deep learning. A branch of machine learning, deep learning attempts to mimic the processes of the human brain by using artificial neural networks to learn from computational data and make predictions.

Bengio's new nonprofit organization, LawZero, is developing 'Scientist AI'. Bengio has said this model will be 'honest and not deceptive' and will incorporate safety-by-design principles. According to a preprint paper released online earlier this year, Scientist AI will differ from current AI systems in two key ways. First, it can assess and communicate its confidence level in its answers, helping to reduce the problem of AI giving overly confident and incorrect responses. Second, it can explain its reasoning to humans, allowing its conclusions to be evaluated and tested for accuracy. Interestingly, older AI systems had this feature, but in the rush for speed and new approaches, many modern AI models cannot explain their decisions. Their developers have sacrificed explainability for speed.

Bengio also intends 'Scientist AI' to act as a guardrail against unsafe AI. It could monitor other, less reliable and harmful AI systems, essentially fighting fire with fire. This may be the only viable solution to improve AI safety. Humans cannot properly monitor systems such as ChatGPT, which handle over a billion queries daily; only another AI can manage this scale. Using an AI system against other AI systems is not just a sci-fi concept – it is a common practice in research to compare and test different levels of intelligence in AI systems.

Large language models and machine learning are just small parts of today's AI landscape. Another key addition Bengio's team is making to Scientist AI is a 'world model', which brings certainty and explainability. Just as humans make decisions based on their understanding of the world, AI needs a similar model to function effectively.

The absence of a world model in current AI systems is clear. One well-known example is the 'hand problem': most of today's AI models can imitate the appearance of hands but cannot replicate natural hand movements, because they lack an understanding of the physics – a world model – behind them. Another example is how models such as ChatGPT struggle with chess, failing to win and even making illegal moves. This is despite simpler AI systems, which do contain a model of the 'world' of chess, beating even the best human players. These issues stem from the lack of a foundational world model in these systems, which are not inherently designed to model the dynamics of the real world.

Yoshua Bengio is recognized as one of the godfathers of AI. Photo: Alex Wong / Getty Images via The Conversation
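The chess case makes the idea concrete. As a toy sketch, the python-chess library encodes the full rules of the game – a small but complete 'world model' – so any move a language model proposes can be checked against the world's actual dynamics. The candidate moves below stand in for hypothetical LLM output.

```python
# python-chess carries a complete model of the chess "world": the rules.
# A proposed move can therefore be validated against the true dynamics,
# which is exactly the check a text-only language model has no way to make.
import chess

board = chess.Board()
board.push_san("e4")   # 1. e4
board.push_san("e5")   # 1... e5

for proposed in ["Nf3", "Ke3"]:  # pretend these strings came from an LLM
    try:
        board.parse_san(proposed)  # raises if the move breaks the rules
        print(f"{proposed}: legal in this position")
    except ValueError:
        print(f"{proposed}: illegal -- rejected by the world model")
```

A chess engine built on such a model can never make an illegal move, whereas a model that only predicts plausible-looking text can; that gap is what Bengio's 'world model' addition is meant to close.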
Bengio is on the right track, aiming to build safer, more trustworthy AI by combining large language models with other AI technologies. However, his journey is not going to be easy. LawZero's US$30 million in funding is small compared with efforts such as the US$500 billion project announced by US President Donald Trump earlier this year to accelerate the development of AI.

Making LawZero's task harder is the fact that Scientist AI – like any other AI project – needs huge amounts of data to be powerful, and most data are controlled by major tech companies. There is also an outstanding question: even if Bengio can build an AI system that does everything he says it can, how will it be able to control other systems that might be causing harm?

Still, this project, with talented researchers behind it, could spark a movement toward a future where AI truly helps humans thrive. If successful, it could set new expectations for safe AI, motivating researchers, developers, and policymakers to prioritize safety. Perhaps if we had taken similar action when social media first emerged, we would have a safer online environment for young people's mental health. And maybe, if Scientist AI had already been in place, it could have prevented people with harmful intentions from accessing dangerous information with the help of AI systems.

Armin Chitizadeh is a lecturer in the School of Computer Science, University of Sydney.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
