Latest news with #chatbots


Daily Mail
3 hours ago
- Business
- Daily Mail
China factory churns out hyper-realistic AI sex dolls that can converse through eight different personalities
Sex doll factories in China have reported booming business thanks to the production of realistic dolls fitted with AI chatbots, some offering eight different personalities, alongside basic silicone dolls. WMDoll, one of the nation's biggest sex doll makers, said it expects a 30 per cent jump in sales this year thanks to its adoption of generative AI tools like ChatGPT. WMDoll's founder Liu Jiangxia told the South China Morning Post: 'It makes the dolls more responsive and interactive, which offers users a better experience.' Unlike traditional dolls, those installed with AI capabilities can speak back to the user. WMDoll says it can make figures with about eight different 'personalities' to choose from, capable of continuing a conversation started a few days earlier. The company fits its dolls with an AI tool designed to pander to the owner's ego, and it can be programmed to ask questions about their 'relationship' and about the user's feelings.
WMDoll's Jiangxia said: 'In the past, these dolls' primary function was to satisfy users' sexual needs. But as their physical features, such as head and joint movements and skin, became more realistic, our customers started to seek emotional companionship in the dolls.' That, Jiangxia said, was when the firm decided to introduce large language models into its products, allowing the dolls to 'react to users verbally'. The company started using AI in its dolls in 2016, and the technology has improved thanks to open-source AI, which has also helped make the dolls cheaper. Dolls are often made with thermoplastic heated to 37C to better reflect human body temperature, and developers say the models have body sensors that also make them feel human-like.
Another manufacturer, Shenzhen Atall Intelligent Robot Technology, previously said that most of its clients are men aged 40-50 from the U.S. and Europe. Users can order custom-made AI dolls at around $3,000 each; they have soft skin made from a rubbery plastic that contains less oil than normal plastics and does not smell. U.S. customers are said to like dolls with darker skin and large breasts, buttocks, and genitals, while Chinese customers go for Asian features with small, hairless genitals, the company said. The firm is selling child-size AI sex robots, both male and female, and the biggest market for them is reportedly in the U.S. and Canada. However, the preference for child-like dolls is nothing new, with police raids around the globe in recent years leading to dolls being seized and owners arrested. The robots' eyes, lips, and heads move, and they speak English and Chinese, developers say. Flexible joints mean the dolls can be positioned in a variety of poses. The company also produces custom-made AI sex robots for $9,400 - two have been ordered by men who wanted them based on their dead wives. Around 70 per cent of customers also ask for hair on the dolls' genital area. Male sex robots are also sold, but they are nine times less popular than female robots, according to manufacturers.
On Chinese social media, some say the products reinforce sexist stereotypes or endorse pedophilia. Meanwhile others, calling themselves 'friends of dolls', share user reviews and advice on dedicated online forums. One anonymous user said in a review of a standard sex doll: 'The material is quite good, very soft to the touch. When I hold her I feel very comfortable.'
China has previously been estimated to make over 80 per cent of the world's sex toys, with over a million people employed in the country's $6.6 billion industry. Chinese women's rights activist Xiao Meili thinks that some men will always have outdated expectations, and 'sex housewife robots' might actually help women. Meili told AFP: 'A lot of men want the same from women - sex, housework, childbirth, and filial piety. They don't think of women as individuals. 'If every nerd buys a sex doll for himself ... that would free a lot of women from these kind of men.'


Bloomberg
4 days ago
- Business
- Bloomberg
Dutch Insurer Scraps Old-Style Chat Bots Amid Uneven AI Rollout
NN Group NV is replacing pre-programmed chat bots in client-facing roles with a version that can interact more naturally with humans, further evidence of the continued but uneven roll-out of artificial intelligence in financial services. The insurance firm, one of the largest in the Netherlands, has also launched an AI assistant tool that call-center staff can use to answer client questions. That has resulted in a marked improvement in response quality, Chief Executive Officer David Knibbe said in an interview on Tuesday.


Bloomberg
6 days ago
- Business
- Bloomberg
ChatGPT's Improved Memory Points Toward a More Personal AI
Welcome to Tech In Depth, our daily newsletter with reporting and analysis about the business of tech from Bloomberg's journalists around the world. Today, Austin Carr reports on the improvements recently announced to the major consumer chatbots that have made it easier for them to know more about their users. OpenAI expansion: The artificial intelligence company has established a legal entity in South Korea and plans to open an office in Seoul as it seeks to propel further adoption of its products.


Coin Geek
6 days ago
- Politics
- Coin Geek
China rolls out new AI guidelines for classroom use
China's Ministry of Education has unveiled guidelines for the use of artificial intelligence (AI) tools in classrooms by teachers and students. According to a local news outlet report, the new guidelines seek to ensure the safe and responsible use of AI by Chinese primary and high school students. Primary school students are prohibited from having unfettered access to AI chatbots in classrooms or for assignments. Middle schoolers, on the other hand, can access AI, but its use will be limited to exploring the logical structure of AI-generated content. Per the guidelines, high schoolers will be encouraged to leverage AI for inquiry-based learning and for understanding the technicalities behind AI chatbots. An official from the Ministry of Education said the intent of the 2025 guidelines is to create a tiered approach.
Despite the differences across the levels, there are several common denominators for AI use in classrooms. Firstly, students are prohibited from submitting AI-generated material as their original work. Furthermore, the rules frown on excessive reliance on AI chatbots for creative tasks, urging students to prioritize critical thinking skills. Across the board, teachers are barred from using AI to grade students or from entering students' private information into AI chatbots. The rules maintain that teachers should use AI chatbots to supplement teaching in the classroom. All provincial educational authorities are expected to establish their own data protection rules and set up a list of accredited AI chatbots for schools. 'AI is a strategic technology driving a new wave of technological and industrial transformation,' said an official from the Ministry of Education. 'It has already reshaped everyday life and set education on a new course of reform and development.'
Regulators urged to wade in to prevent AI misuse
Since generative AI debuted in late 2022, authorities have hesitated to deploy chatbots in classrooms. Japan took up the gauntlet, allowing limited use in schools, but the avalanche of tech firms unveiling new offerings has sparked concerns. The UN has made repeated calls for tighter AI restrictions in schools, warning that unfettered use will affect the emotional well-being of younger students. Several critics are making the case for imposing age limits on classroom use, while others are calling for a forward-thinking approach that uses AI and other emerging technologies to improve current learning methods.
Egypt to prioritize AI lessons in classrooms across the country for digitization
Elsewhere, Egypt is increasing its appetite for emerging technologies, with the latest move being a full embrace of AI in the educational sector. President Abdel-Fattah El Sisi is pushing to introduce AI in schools across the country and wants to make AI lessons compulsory for students to improve the North African nation's digitization metrics. The plan has gathered significant steam, with President Sisi issuing a clear directive to Prime Minister Mostafa Madbouly and the Health and Population Ministry. In a meeting with the Prime Minister and other administrators, the president set out a sweeping change to the country's educational qualifications. The proposed changes will allow primary and high school students to receive up to four hours of AI classes each week.
Under the incoming rules, students will lean on AI to aid learning, but authorities are keen to roll out guidelines for safe and responsible usage. The Ministry will vet the AI courses, while approved chatbots will be whitelisted before mainstream use. President Sisi says the compulsory AI classes will position Egypt as the undisputed regional leader for digital literacy and emerging technology adoption. Already, Cairo University and Ain Shams University have begun offering AI courses for students, with President Sisi targeting a deeper talent pool to power the country's digitization drive.
Outside the classroom, Egypt has its sights on AI deployment in healthcare. Egyptian hospitals are set to begin experimenting with AI-based solutions for diagnostics, patient aftercare, and administrative tasks. There are plans to extend AI solutions to agriculture, transport management, security, and the financial sector. Egypt has previously unveiled an ambitious plan to harness emerging technologies to digitize its local economy.
A regional AI race is emerging
Egypt has to contend with other regional first-movers in the quest to become the leading AI hub in MENA. Saudi Arabia and the United Arab Emirates (UAE) have launched national AI educational initiatives while injecting capital to drive adoption. However, the mad dash toward AI integration in educational institutions has drawbacks. The UN urges authorities to exercise caution and roll out guardrails to prevent misuse. Educational administrators favor a tiered approach toward AI integration, limiting AI use in primary schools and increasing the pace of adoption in secondary schools.


Sky News
23-05-2025
- Entertainment
- Sky News
Mum can continue lawsuit against AI chatbot firm she holds responsible for son's death
The mother of a 14-year-old boy who she says took his own life after becoming obsessed with artificial intelligence chatbots can continue her legal case against the company behind the technology, a judge has ruled.
"This decision is truly historic," said Meetali Jain, director of the Tech Justice Law Project, which is supporting the family's case. "It sends a clear signal to [AI] companies [...] that they cannot evade legal consequences for the real-world harm their products cause," she said in a statement.
Megan Garcia, the mother of Sewell Setzer III, claims in a lawsuit filed in Florida that Character.AI targeted her son with "anthropomorphic, hypersexualized, and frighteningly realistic experiences". "A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life," said Ms Garcia. Sewell shot himself with his father's pistol in February 2024, seconds after asking the chatbot: "What if I come home right now?" The chatbot replied: "... please do, my sweet king."
In her ruling this week, US Senior District Judge Anne Conway described how Sewell became "addicted" to the app within months of using it, quitting his basketball team and becoming withdrawn. He was particularly addicted to two chatbots based on Game of Thrones characters, Daenerys Targaryen and Rhaenyra Targaryen. "[I]n one undated journal entry he wrote that he could not go a single day without being with the [Daenerys Targaryen Character] with which he felt like he had fallen in love; that when they were away from each other they (both he and the bot) 'get really depressed and go crazy'," the judge wrote in her ruling.
Ms Garcia, who is working with the Tech Justice Law Project and Social Media Victims Law Center, alleges that Character.AI "knew" or "should have known" that its model "would be harmful to a significant number of its minor customers". The case holds Character.AI's founders and Google, where the founders began working on the model, responsible for Sewell's death. Ms Garcia launched proceedings against both companies in October.
A Character.AI spokesperson said the company will continue to fight the case and employs safety features on its platform to protect minors, including measures to prevent "conversations about self-harm". A Google spokesperson said the company strongly disagrees with the decision. They added that Google and Character.AI are "entirely separate" and that Google "did not create, design, or manage" the app "or any component part of it".
Defence lawyers tried to argue the case should be thrown out because chatbots deserve First Amendment protections, and that ruling otherwise could have a "chilling effect" on the AI industry. Judge Conway rejected that claim, saying she was "not prepared" to hold that the chatbots' output constitutes speech "at this stage", although she did agree that users had a right to receive the "speech" of the chatbots.
Anyone feeling emotionally distressed or suicidal can call Samaritans for help on 116 123 or email jo@samaritans.org in the UK. In the US, call the Samaritans branch in your area or 1 (800) 273-TALK.