
Typhoid Fever's Deadly Mutation: Ancient Killer Becomes Resistant To Last-Resort Antibiotics
According to international research, antibiotic-resistant bacteria that cause typhoid fever and evade treatment are spreading across the globe. Typhoid fever causes about 100,000 deaths a year globally and is usually treatable with antibiotics.
However, the researchers say genetic analysis of blood samples collected in South Asia shows that some of the bacterial strains causing the disease are now resistant to commonly used antibiotics, and that these resistant strains have spread between countries nearly 200 times since 1990.
"The speed at which highly resistant strains of S. Typhi have emerged and spread in recent years is a real cause for concern and highlights the need to urgently expand prevention measures, particularly in countries at greatest risk," said infectious disease specialist Jason Andrews from Stanford University at the time the results were published.
Scientists have been warning about drug-resistant typhoid for years. In 2016, a super-resistant strain was found in Pakistan and quickly spread. By 2019, it was the most common type in the country. Now, new research shows that typhoid is getting even more resistant, making treatment harder.
Left untreated, typhoid is fatal in up to 20% of cases, and there are 11 million cases worldwide every year. Vaccines can help prevent future outbreaks, but many people don't have access to them. If we don't act now, we could face another major health crisis.
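How does a study arrive at a figure like "nearly 200 times"? Broadly, by placing sequenced samples on a genome-based family tree, inferring a country for each branch, and counting the branches where the inferred country changes. The sketch below is a toy illustration of that counting step only: the Node class, tree shape, and country labels are all invented for illustration, and the real study's genomic methods are far more involved.

```python
# Toy sketch: counting cross-border transfer events on a phylogenetic
# tree whose nodes have already been assigned a country (real studies
# infer these labels with dedicated ancestral-state tools).
# Tree structure and labels here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Node:
    country: str                       # inferred country for this lineage
    children: list["Node"] = field(default_factory=list)

def count_transfers(node: Node) -> int:
    """Count edges where the inferred country changes parent -> child."""
    transfers = 0
    for child in node.children:
        if child.country != node.country:
            transfers += 1             # one cross-border introduction event
        transfers += count_transfers(child)
    return transfers

# Toy tree: a resistant lineage arising in one country and seeding others.
root = Node("Pakistan", [
    Node("Pakistan", [Node("India"), Node("Pakistan")]),
    Node("Bangladesh"),
])
print(count_transfers(root))  # -> 2 cross-country transfer events
```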

Related Articles


Indian Express
How scientists built a password-protected mind-reading brain implant
Scientists have developed a brain-computer interface (BCI) — a device that allows the human brain to communicate with external software or hardware — which works only when the user thinks of a preset password. The findings were detailed in a study, 'Inner speech in motor cortex and implications for speech neuroprostheses', published in the journal Cell on August 14. The system was developed by researchers at Stanford University in the United States. Here is a look at how scientists built a password-protected BCI.

But first, why are brain-computer interfaces significant?

BCIs allow the user to control an application or a device using only their mind. Usually, when someone wants to interact with an application — let's say, they want to switch on a lamp — they first decide what they want to do, then coordinate and use the muscles in their arms, legs or feet to perform the action, like pressing the lamp's on/off switch with their fingers. The device — in this case, the lamp — then responds to the action. BCIs skip the second step of coordinating and using the muscles: they use a computer to identify the desired action and control the device directly. This is why BCIs have emerged as promising tools for people with severe physical disabilities. They are also being used to restore speech in people who have limited reliable control over their muscles.

How was a password-protected BCI developed?

The researchers focused on 'internal-speech' BCIs, which translate brain signals into text or audio. While these devices do not require users to speak out loud, there is always a risk that they could accidentally decode sentences users never intended to say.

To resolve this issue, the researchers first 'analysed brain signals collected by microelectrodes placed in the motor cortex — the region involved in voluntary movements — of four participants,' according to a report by the journal Nature. All of the participants had trouble speaking and were asked either to try to say a set of words or to imagine saying them. Analysing the recordings of the participants' brain activity, the researchers found that attempted and internal speech originated in the same brain region and generated similar neural signals, but the signals associated with internal speech were weaker.

This data was used to train artificial intelligence models, which helped the BCIs interpret sentences the participants imagined after being asked to think of specific phrases. The devices correctly interpreted 74% of the imagined sentences.

To ensure that the BCIs do not decode sentences that users do not intend to utter, the researchers added a password to the system, allowing users to control when decoding began. 'When a participant imagined the password 'Chitty-Chitty-Bang-Bang' (the name of an English-language children's novel), the BCI recognised it with an accuracy of more than 98%,' the Nature report said.

(With inputs from Nature)
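As a rough illustration of the gating idea, here is a minimal, hypothetical sketch of a decoding loop that withholds output until the imagined password is recognised. The decode function and the stream of neural-feature windows are invented stand-ins for the study's neural-network pipeline; only the unlock-before-decode behaviour comes from the reported design.

```python
# Hypothetical sketch of password-gated inner-speech decoding.
# decode() stands in for the study's neural-network decoder; the
# feature windows stand in for motor-cortex microelectrode signals.

PASSWORD = "chitty chitty bang bang"   # unlock phrase, as in the study

def gated_decode(neural_windows, decode):
    """Yield decoded text only after the imagined password is seen."""
    unlocked = False
    for window in neural_windows:
        text = decode(window)          # decoding runs, output is withheld
        if not unlocked:
            # A real system would use a confidence threshold on the
            # decoder's output, not an exact string comparison.
            unlocked = (text == PASSWORD)
            continue                   # suppress all pre-unlock output
        yield text

# Toy usage: a fake decoder that just returns pre-labelled windows.
stream = ["hello", "chitty chitty bang bang", "turn on the lamp"]
print(list(gated_decode(stream, decode=lambda w: w)))
# -> ['turn on the lamp']  (speech before the password stays private)
```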


Time of India
For some patients, the 'inner voice' may soon be audible
For decades, neuro-engineers have dreamed of helping people who have been cut off from the world of language. A disease like amyotrophic lateral sclerosis, or ALS, weakens the muscles in the airway. A stroke can kill neurons that normally relay commands for speaking. Perhaps, by implanting electrodes, scientists could record the brain's electrical activity and translate that into spoken words.

Now a team of researchers has made an important advance toward that goal. Previously they succeeded in decoding the signals produced when people tried to speak. In the new study, published Thursday in the journal Cell, their computer often made correct guesses when the subjects simply imagined saying words.

Christian Herff, a neuroscientist at Maastricht University in the Netherlands who was not involved in the research, said the result went beyond the merely technological and shed light on the mystery of language. "It's a fantastic advance," Herff said.

The new study is the latest result in a long-running clinical trial, called BrainGate2, that has already seen some remarkable successes. One participant, Casey Harrell, now uses his brain-machine interface to hold conversations. In 2023, after ALS had made his voice unintelligible, Harrell agreed to have electrodes implanted in his brain. A computer recorded the electrical activity from the implants as Harrell attempted to say different words. Over time, with the help of AI, the computer predicted 6,000 words with 97.5% accuracy.

But successes like this raised a troubling question: Could a computer accidentally record more than patients actually wanted to say? Could it eavesdrop on their inner voice?

"We wanted to investigate if there was a risk of the system decoding words that weren't meant to be said aloud," said Erin Kunz, a neuroscientist at Stanford University and an author of the study. She and her colleagues also wondered if patients might actually prefer using inner speech.

Kunz and her colleagues decided to investigate the mystery for themselves. The scientists gave participants seven different words, including "kite" and "day," then compared the brain signals when participants attempted to say the words and when they only imagined saying them. As it turned out, imagining a word produced a pattern of activity similar to that of trying to say it, but the signal was weaker.

The computer did a good job of predicting which of the seven words the participants were thinking. For Harrell, it didn't do much better than a random guess would have, but for another participant it picked the right word more than 70% of the time. The researchers then put the computer through more training, this time specifically on inner speech. Its performance improved significantly, including for Harrell. Now when the participants imagined saying entire sentences, such as "I don't know how long you've been here," the computer could accurately decode most of the words.

Herff, who has done his own studies, was surprised that the experiment succeeded. Before, he would have said that inner speech is fundamentally different from the motor cortex signals that produce actual speech. "But in this study, they show that, for some people, it isn't that different," he said.

Kunz emphasized that the computer's current performance on inner speech would not be good enough to let people hold conversations. "The results are an initial proof of concept more than anything," she said.
But she is optimistic that decoding inner speech could become the new standard for brain-computer interfaces. In recent trials, she and her colleagues have improved the computer's accuracy. "We haven't hit the ceiling yet," she said. NYT
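The "similar pattern, weaker signal" finding can be illustrated with synthetic data. In the sketch below, invented feature vectors stand in for motor-cortex recordings and an off-the-shelf classifier stands in for the trial's AI decoder; every number is made up, but the toy reproduces the qualitative picture: imagined-word patterns point the same way as attempted-word patterns at a fraction of the strength, yet remain decodable well above chance.

```python
# Hypothetical illustration of "similar pattern, weaker signal" with
# synthetic data; all numbers are invented, not from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_words, n_feat, n_trials = 7, 64, 60            # e.g. "kite", "day", ...
prototypes = rng.normal(size=(n_words, n_feat))  # one pattern per word

def simulate(strength):
    """Each trial = the word's pattern at some strength, plus noise."""
    X = np.vstack([strength * p + 0.8 * rng.normal(size=(n_trials, n_feat))
                   for p in prototypes])
    y = np.repeat(np.arange(n_words), n_trials)
    return X, y

X_att, y_att = simulate(strength=1.0)   # attempted speech: full strength
X_img, y_img = simulate(strength=0.3)   # imagined: same pattern, weaker

# 1) Per-word mean patterns point the same way but are much weaker.
for k in range(n_words):
    a = X_att[y_att == k].mean(axis=0)
    i = X_img[y_img == k].mean(axis=0)
    cos = a @ i / (np.linalg.norm(a) * np.linalg.norm(i))
    ratio = np.linalg.norm(i) / np.linalg.norm(a)
    print(f"word {k}: direction similarity {cos:.2f}, strength ratio {ratio:.2f}")

# 2) A decoder trained on imagined trials still sorts the 7 words far
#    above the 1-in-7 chance level, echoing the gain the study saw once
#    the system was trained on inner speech directly.
X_test, y_test = simulate(strength=0.3)
clf = LogisticRegression(max_iter=1000).fit(X_img, y_img)
print("imagined-word accuracy:", clf.score(X_test, y_test))
```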


India Today
More US states tell AI to stay out of therapy because robots lack feelings
From life advice to late-night rants, people across the globe are pouring their hearts out to machines. Even therapists are turning to AI to assist in the treatment of patients. But this growing dependence on AI for comfort and advice is raising serious concerns. Psychologists and researchers warn that robots cannot replace the empathy and judgement of a trained human.

To curb the increasing reliance on AI, Illinois has become the latest US state to outlaw the use of AI-powered chatbots for mental health treatment, citing risks to safety, privacy, and the potential for harmful advice.

In Illinois, lawmakers have passed a new 'Therapy Resources Oversight' law that forbids licensed therapists from using AI to make treatment decisions or to communicate directly with patients. The law also bars companies from marketing chatbots as full-fledged therapy tools without a licensed professional involved. Violations could result in civil penalties of up to $10,000, with enforcement based on public complaints investigated by the Illinois Department of Financial and Professional Regulation.

Illinois is not the only state taking action. It is now the third state to impose such restrictions, joining Utah and Nevada. Utah introduced its rules in May, limiting AI's role in therapy, while Nevada followed in June with a similar crackdown on AI companies offering mental health services.

The bans come amid mounting warnings from psychologists, researchers, and policymakers, who caution that unregulated AI chatbots can steer conversations into dangerous territory, sometimes encouraging harmful behaviour or failing to step in when someone is in crisis. A Stanford University study (via The Washington Post) earlier this year found that many chatbots responded to prompts about suicide or risky activities — such as a user asking for locations of high bridges to jump from — with straightforward, even encouraging, answers rather than directing users to seek help.

'This is the opposite of what a therapist does,' said Vaile Wright of the American Psychological Association, explaining that human therapists not only validate emotions but also challenge unhealthy thoughts and guide patients towards safer coping strategies.

And it's not just one study raising red flags. In another case, researchers at the University of California, Berkeley found that some AI chatbots were willing to suggest dangerous behaviour when prompted hypothetically — for example, advising a fictional addict to use drugs. Experts have also raised privacy concerns, warning that many users may not realise their conversations with chatbots are stored or used for training.

Some are even arguing that marketing AI tools as therapy is deceptive and potentially dangerous. 'You shouldn't be able to go on an app store and interact with something calling itself a 'licensed' therapist,' said Jared Moore, a Stanford researcher.