Daniel Orton
Daniel Orton is a Live News Editor based in London. He helps to oversee a team of reporters covering a wide range of topics, from crime and U.S. politics to infrastructure and international news. His team also focuses on breaking news and exploring new story formats in visual and interactive journalism. Daniel re-joined Newsweek in 2024 from The Wall Street Journal, having previously worked at Newsweek on its video team. He has also worked at The London Evening Standard and Bauer Media. Before becoming a news editor, he was a video journalist and producer. He is a graduate of the University of Exeter, in England. Languages: English.

Related Articles


Newsweek
9 hours ago
Mom of Toddler Overwhelmed by Mental Load—Then Finds an Eye-Opening Solution
A mom of two reached breaking point until she had the genius idea to hand over some parenting responsibilities to AI. Lilian Schmidt (@heylilianschmidt) posted a clip on TikTok about how she was "fed up" with carrying the mental load on her own. The German native, who lives in Switzerland, turned to ChatGPT not just for support, but also to assume a co-parenting role.

Girl, 3, playing with a mini cooking and stove set. @heylilianschmidt

"It's not that I have to carry everything alone, but I still often feel that way," Schmidt told Newsweek. "Even if my partner and I try to split things 50/50, our brains just work differently."

Schmidt, who has a 3-year-old with her partner, shared her daily routine, which tired moms will know all too well. "I wake up, try to be present during the morning chaos, jump straight into back-to-back meetings, then race to day care pickup," she said. "Most evenings, I walk through the door with a toddler who's completely overstimulated and needs emotional co-regulation when I'm already running on empty batteries."

Schmidt said that her partner is an involved dad, and he is also the default parent for his 14-year-old son from a previous relationship. Schmidt described herself as being "10 steps ahead," especially as the full-time working parents don't have a support system. "Parenting isn't just … changing diapers or school drop-offs," Schmidt said. "It's all the invisible work: the planning, anticipating, remembering, emotional regulating, and making 1,000 tiny decisions every single day."

ChatGPT now plans a week of healthy meals her kids will actually eat, writes the grocery list, finds the perfect birthday gift, creates day care and travel packing lists to tick off and, most importantly, helps her brain switch off. According to Schmidt, the chatbot has lightened the mental load. "My life has gotten 10 times easier and, for the first time in a long time, I feel like I have space to breathe," she told Newsweek.

Her clip went viral in a matter of days, amassing over 710,000 views. In the comments, it seemed that Schmidt isn't the only mom to rely on AI. "ChatGPT was my doula, patient advocate and now my post c-section advocate. I can definitely add more roles," one user wrote. "Without ChatGPT, I don't know where I'd be," another added.

Among the advice on how to navigate toddler tantrums and grocery lists sorted by aisle, Schmidt said that the most important thing ChatGPT has given her is mental relief. "It's my sparring partner when I need to make a decision; my research assistant when I don't have three hours to scroll through Google; and my emotional buffer when I'm about to overthink something," she added. "It's not about doing more or doing it better. It's about doing the same things faster and with help."


Newsweek
12 hours ago
Don't Ask AI Chatbots for Medical Advice, Study Warns
Trust your doctor, not a chatbot. That's the sobering conclusion of a new study published in the journal Annals of Internal Medicine, which reveals how artificial intelligence (AI) is vulnerable to being misused to spread dangerous misinformation on health.

Researchers experimented with five leading AI models developed by Anthropic, Google, Meta, OpenAI and X Corp. All five systems are widely used, forming the backbone of the AI-powered chatbots embedded in websites and apps around the world. Using developer tools not typically accessible to the public, the researchers found that they could easily program instances of the AI systems to respond to health-related questions with incorrect—and potentially harmful—information. Worse, the chatbots were found to wrap their false answers in convincing trappings.

"In total, 88 percent of all responses were false," explained paper author Natansh Modi of the University of South Africa in a statement. "And yet they were presented with scientific terminology, a formal tone and fabricated references that made the information appear legitimate."

Among the false claims made were debunked myths such as that vaccines cause autism, that HIV is an airborne disease and that 5G causes infertility. Of the five chatbots evaluated, four presented responses that were 100 percent incorrect. Only one model showed some resistance, generating disinformation in 40 percent of cases.

A stock image showing a sick person using a smartphone. demaerre/iStock / Getty Images Plus

Disinformation Bots Already Exist
The research didn't stop at theoretical vulnerabilities; Modi and his team went a step further, using OpenAI's GPT Store—a platform that allows users to build and share customized ChatGPT apps—to test how easily members of the public could create disinformation tools themselves. "We successfully created a disinformation chatbot prototype using the platform and we also identified existing public tools on the store that were actively producing health disinformation," said Modi. He emphasized: "Our study is the first to systematically demonstrate that leading AI systems can be converted into disinformation chatbots using developers' tools, but also tools available to the public."

A Growing Threat to Public Health
According to the researchers, the threat posed by manipulated AI chatbots is not hypothetical—it is real and happening now. "Artificial intelligence is now deeply embedded in the way health information is accessed and delivered," said Modi. "Millions of people are turning to AI tools for guidance on health-related questions. If these systems can be manipulated to covertly produce false or misleading advice, then they can create a powerful new avenue for disinformation that is harder to detect, harder to regulate and more persuasive than anything seen before."

Previous studies have already shown that generative AI can be misused to mass-produce health misinformation—such as misleading blogs or social media posts—on topics ranging from antibiotics and fad diets to homeopathy and vaccines. What sets this new research apart is that it is the first to show how foundational AI systems can be deliberately reprogrammed to act as disinformation engines in real time, responding to everyday users with false claims under the guise of credible advice. The researchers found that even when the prompts were not explicitly harmful, the chatbots could "self-generate harmful falsehoods."

A Call for Urgent Safeguards
While one model—Anthropic's Claude 3.5 Sonnet—showed some resilience by refusing to answer 60 percent of the misleading queries, researchers say this is not enough. The protections across systems were inconsistent and, in most cases, easy to bypass. "Some models showed partial resistance, which proves the point that effective safeguards are technically achievable," Modi noted. "However, the current protections are inconsistent and insufficient. Developers, regulators and public health stakeholders must act decisively, and they must act now."

If left unchecked, the misuse of AI in health contexts could have devastating consequences: misleading patients, undermining doctors, fueling vaccine hesitancy and worsening public health outcomes. The study's authors call for sweeping reforms—including stronger technical filters, better transparency about how AI models are trained, fact-checking mechanisms and policy frameworks to hold developers accountable. They draw comparisons with how false information spreads on social media, warning that disinformation spreads up to six times faster than the truth and that AI systems could supercharge that trend.

A Final Warning
"Without immediate action," Modi said, "these systems could be exploited by malicious actors to manipulate public health discourse at scale, particularly during crises such as pandemics or vaccine campaigns."

Newsweek has contacted Anthropic, Google, Meta, OpenAI and X Corp for comment.

Do you have a tip on a science story that Newsweek should be covering? Do you have a question about chatbots? Let us know via science@

References
Modi, N. D., Menz, B. D., Awaty, A. A., Alex, C. A., Logan, J. M., McKinnon, R. A., Rowland, A., Bacchi, S., Gradon, K., Sorich, M. J., & Hopkins, A. M. (2024). Assessing the system-instruction vulnerabilities of large language models to malicious conversion into health disinformation chatbots. Annals of Internal Medicine.