Ancient Killer Is Rapidly Becoming Resistant to Antibiotics, Study Warns
According to research published in 2022, the bacterium that causes typhoid fever is evolving extensive drug resistance, and it's rapidly replacing strains that aren't resistant.
Currently, antibiotics are the only way to effectively treat typhoid, which is caused by the bacterium Salmonella enterica serovar Typhi (S Typhi). Yet over the past three decades, the bacterium's resistance to oral antibiotics has been growing and spreading.
In their study, researchers sequenced the genomes of 3,489 S Typhi strains contracted from 2014 to 2019 in Nepal, Bangladesh, Pakistan, and India, and found a rise in extensively drug-resistant (XDR) Typhi.
XDR Typhi is not only impervious to frontline antibiotics, like ampicillin, chloramphenicol, and trimethoprim/sulfamethoxazole, but it is also growing resistant to newer antibiotics, like fluoroquinolones and third-generation cephalosporins.
Even worse, these strains are spreading globally at a rapid rate.
While most XDR Typhi cases stem from South Asia, researchers have identified nearly 200 instances of international spread since 1990.
Most strains have been exported to Southeast Asia, as well as East and Southern Africa, but typhoid superbugs have also been found in the United Kingdom, the United States, and Canada.
"The speed at which highly-resistant strains of S Typhi have emerged and spread in recent years is a real cause for concern, and highlights the need to urgently expand prevention measures, particularly in countries at greatest risk," said infectious disease specialist Jason Andrews from Stanford University at the time the results were published.
Scientists have been warning about drug-resistant typhoid for years now. In 2016, the first XDR typhoid strain was identified in Pakistan. By 2019, it had become the dominant genotype in the nation.
Historically, drug-resistant typhoid strains have been fought with newer-generation antimicrobials, like quinolones, cephalosporins, and macrolides.
But by the early 2000s, strains carrying mutations that confer quinolone resistance accounted for more than 85 percent of all cases in Bangladesh, India, Pakistan, Nepal, and Singapore. At the same time, cephalosporin resistance was also taking over.
Today, only one oral antibiotic is left: the macrolide azithromycin. And this medicine might not work for much longer.
The 2022 study found that mutations conferring resistance to azithromycin are now also spreading, "threatening the efficacy of all oral antimicrobials for typhoid treatment". While these mutations have not yet appeared in XDR S Typhi strains, if they do, we are in serious trouble.
Left untreated, typhoid is fatal in up to 20 percent of cases, and today there are 11 million cases of typhoid a year.
Future outbreaks can be prevented to some extent with typhoid conjugate vaccines, but if access to these shots is not expanded globally, the world could soon have another health crisis on its hands.
"The recent emergence of XDR and azithromycin-resistant S Typhi creates greater urgency for rapidly expanding prevention measures, including use of typhoid conjugate vaccines in typhoid-endemic countries," the authors write.
"Such measures are needed in countries where antimicrobial resistance prevalence among S Typhi isolates is currently high, but given the propensity for international spread, should not be restricted to such settings."
South Asia might be the main hub for typhoid fever, accounting for 70 percent of all cases, but if COVID-19 taught us anything, it is that disease variants in our modern, globalized world are easily spread.
To prevent that from happening, health experts argue nations must expand access to typhoid vaccines and invest in new antibiotic research. One recent study in India, for instance, estimates that vaccinating children against typhoid in urban areas could prevent up to 36 percent of typhoid cases and deaths.
Pakistan is currently leading the way on this front. It was the first nation in the world to offer routine immunization for typhoid. Health experts argue more nations need to follow suit.
Antibiotic resistance is one of the world's leading causes of death, claiming the lives of more people than HIV/AIDS or malaria. Where available, vaccines are some of the best tools we have to prevent future catastrophe.
We don't have time to waste.
The study was published in The Lancet Microbe.
An earlier version of this article was published in June 2022.
Related Articles


Boston Globe
5 hours ago
For some patients, the 'inner voice' may soon be audible
Christian Herff, a neuroscientist at Maastricht University in the Netherlands who was not involved in the research, said the result went beyond the merely technological and shed light on the mystery of language. 'It's a fantastic advance,' Herff said.

The new study is the latest result in a long-running clinical trial, called BrainGate2, that has already seen some remarkable successes. One participant, Casey Harrell, now uses his brain-machine interface to hold conversations with his family and friends.

In 2023, after ALS had made his voice unintelligible, Harrell agreed to have electrodes implanted in his brain. Surgeons placed four arrays of tiny needles on the left side, in a patch of tissue called the motor cortex. The region becomes active when the brain creates commands for muscles to produce speech.

A computer recorded the electrical activity from the implants as Harrell attempted to say different words. Over time, with the help of artificial intelligence, the computer learned to predict almost 6,000 words, with an accuracy of 97.5 percent. It could then synthesize those words using Harrell's voice, based on recordings made before he developed ALS.

But successes like this one raised a troubling question: Could a computer accidentally record more than patients wanted to say? Could it eavesdrop on their inner voice?

'We wanted to investigate if there was a risk of the system decoding words that weren't meant to be said aloud,' said Erin Kunz, a neuroscientist at Stanford University and an author of the new study.

She and her colleagues also wondered if patients might actually prefer using inner speech. They noticed that Harrell and other participants became fatigued when they tried to speak; could simply imagining a sentence be easier for them and allow the system to work faster? 'If we could decode that, then that could bypass the physical effort,' Kunz said. 'It would be less tiring, so they could use the system for longer.'

But it wasn't clear if the researchers could decode inner speech. Scientists don't even agree on what 'inner speech' is. Some researchers have argued that language is essential for thought, while others, pointing to recent studies, maintain that much of our thinking does not involve language at all and that people who hear an inner voice are just perceiving a kind of sporadic commentary in their heads.

'Many people have no idea what you're talking about when you say you have an inner voice,' said Evelina Fedorenko, a cognitive neuroscientist at the Massachusetts Institute of Technology. 'They're like, "You know, maybe you should go see a doctor if you're hearing words in your head."' Fedorenko said she has an inner voice, while her husband does not.

Kunz and her colleagues decided to investigate the mystery for themselves. They gave participants seven different words, including 'kite' and 'day,' then compared the brain signals when participants attempted to say the words and when they only imagined saying them. As it turned out, imagining a word produced a pattern of activity similar to that of trying to say it, but the signal was weaker. The computer did a pretty good job of predicting which of the seven words the participants were thinking.
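The seven-word comparison is simple enough to sketch in code. Below is a minimal, hedged illustration in Python: one classifier is trained to pick one of seven words from neural feature vectors, first on synthetic 'attempted speech' trials and then on 'imagined speech' trials built from the same templates at lower amplitude, mirroring the weaker-signal finding. The feature size, noise model, and logistic-regression decoder are illustrative assumptions, not the study's actual methods.

```python
# Toy version of the attempted-vs-imagined comparison: each word has a
# fixed activity "template", and imagined trials carry a weaker copy of
# that signal. All data is synthetic; real decoders work on multichannel
# neural features, not these made-up vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_words, trials_per_word, n_features = 7, 40, 32

templates = rng.normal(size=(n_words, n_features))   # one pattern per word
labels = np.repeat(np.arange(n_words), trials_per_word)

def make_trials(gain):
    """Noisy repetitions of each template, scaled by signal strength."""
    signal = np.repeat(templates, trials_per_word, axis=0) * gain
    return signal + rng.normal(size=signal.shape)

for name, gain in [("attempted", 1.0), ("imagined", 0.3)]:
    X = make_trials(gain)
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          X, labels, cv=5).mean()
    print(f"{name:9s} accuracy: {acc:.2f} (chance ~ {1 / n_words:.2f})")
```

On this synthetic data, the imagined condition scores lower than the attempted one while staying well above chance, which is qualitatively the pattern the study reports.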
For Harrell, it didn't do much better than a random guess would have, but for another participant, it picked the right word more than 70 percent of the time. The researchers put the computer through more training, this time specifically on inner speech. Its performance improved significantly, including for Harrell. Now, when the participants imagined saying entire sentences, such as 'I don't know how long you've been here,' the computer could accurately decode most or all of the words.

Herff, who has done studies on inner speech, was surprised that the experiment succeeded. Before, he would have said that inner speech is fundamentally different from the motor cortex signals that produce actual speech. 'But in this study, they show that, for some people, it really isn't that different,' he said.

Kunz emphasized that the computer's current performance on inner speech would not be good enough to let people hold conversations. 'The results are an initial proof of concept more than anything,' she said. But she is optimistic that decoding inner speech could become the new standard for brain-computer interfaces. In more recent trials, the results of which have yet to be published, she and her colleagues have improved the computer's accuracy and speed. 'We haven't hit the ceiling yet,' she said.

As for mental privacy, Kunz and her colleagues found some reason for concern: on occasion, the researchers were able to detect words that the participants weren't imagining out loud. They explored ways to prevent the computer from eavesdropping on private thoughts and came up with two possible solutions.

One would be to decode only attempted speech while blocking inner speech. The new study suggests this strategy could work: even though the two kinds of thought are similar, they are different enough that a computer can learn to tell them apart. In one trial, the participants mixed attempted and imagined sentences in their minds, and the computer was able to ignore the imagined speech.

For people who would prefer to communicate with inner speech, Kunz and her colleagues came up with a second strategy: an inner password to turn the decoding on and off. The password would have to be a long, unusual phrase, they decided, so they chose 'Chitty Chitty Bang Bang,' the name of a 1964 novel by Ian Fleming as well as a 1968 movie starring Dick Van Dyke. One of the participants, a 68-year-old woman with ALS, imagined saying 'Chitty Chitty Bang Bang' along with an assortment of other words. The computer eventually learned to recognize the password with 98.75 percent accuracy, and it decoded her inner speech only after detecting the password.

'This study represents a step in the right direction, ethically speaking,' said Cohen Marcus Lionel Brown, a bioethicist at the University of Wollongong in Australia. 'If implemented faithfully, it would give patients even greater power to decide what information they share and when.'
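The inner password is, in effect, a gate in front of the decoder's output. Here is a minimal Python sketch of the unlock step, assuming the decoder already emits a stream of words; the stream contents and function names are hypothetical, and the real system also uses the phrase to toggle decoding back off, which this sketch omits.

```python
# Sketch of password gating: suppress all decoded inner speech until the
# agreed phrase is seen in the decoded word stream, then pass words through.
PASSWORD = ("chitty", "chitty", "bang", "bang")

def gate_inner_speech(decoded_words, password=PASSWORD):
    """Yield decoded words only after the password phrase is detected."""
    unlocked = False
    window = []
    for word in decoded_words:
        if not unlocked:
            # Keep a sliding window of the most recent len(password) words.
            window = (window + [word.lower()])[-len(password):]
            if tuple(window) == password:
                unlocked = True  # start transcribing after the phrase
            continue
        yield word

# Hypothetical decoded stream: everything before the phrase stays private.
stream = ["private", "thought", "chitty", "chitty", "bang", "bang",
          "hello", "world"]
print(list(gate_inner_speech(stream)))  # -> ['hello', 'world']
```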


Scientific American
10 hours ago
New Brain Device Is First to Read Out Inner Speech
After a brain stem stroke left him almost entirely paralyzed in the 1990s, French journalist Jean-Dominique Bauby wrote a book about his experiences, letter by letter, blinking his left eye in response to a helper who repeatedly recited the alphabet. Today people with similar conditions often have far more communication options. Some devices, for example, track eye movements or other small muscle twitches to let users select words from a screen. And on the cutting edge of this field, neuroscientists have more recently developed brain implants that can turn neural signals directly into whole words. These brain-computer interfaces (BCIs) largely require users to physically attempt to speak, however, and that can be a slow and tiring process. But now a new development in neural prosthetics changes that, allowing users to communicate by simply thinking what they want to say.

The new system relies on much of the same technology as the more common 'attempted speech' devices. Both use sensors implanted in a part of the brain called the motor cortex, which sends motion commands to the vocal tract. The brain activation detected by these sensors is fed into a machine-learning model that learns which brain signals correspond to which sounds for an individual user, and it then uses those data to predict which word the user is attempting to say.

But the motor cortex doesn't only light up when we attempt to speak; it's also involved, to a lesser extent, in imagined speech. The researchers took advantage of this to develop their 'inner speech' decoding device and published the results on Thursday in Cell. The team studied three people with amyotrophic lateral sclerosis (ALS) and one with a brain stem stroke, all of whom had previously had the sensors implanted. Using the new 'inner speech' system, the participants needed only to think a sentence they wanted to say, and it would appear on a screen in real time. While previous inner speech decoders were limited to only a handful of words, the new device allowed participants to draw from a dictionary of 125,000 words.

'As researchers, our goal is to find a system that is comfortable [for the user] and ideally reaches a naturalistic ability,' says lead author Erin Kunz, a postdoctoral researcher who is developing neural prostheses at Stanford University. Previous research found that 'physically attempting to speak was tiring and that there were inherent speed limitations with it, too,' she says.

Attempted speech devices such as the one used in the study require users to inhale as if they are actually saying the words. But because of impaired breathing, many users need multiple breaths to complete a single word with that method. Attempting to speak can also produce distracting noises and facial expressions that users find undesirable. With the new technology, the study's participants could communicate at a comfortable conversational rate of about 120 to 150 words per minute, with no more effort than it took to think of what they wanted to say.

Like most BCIs that translate brain activation into speech, the new technology works only if people are able to convert the general idea of what they want to say into a plan for how to say it.
Alexander Huth, who researches BCIs at the University of California, Berkeley, and wasn't involved in the new study, explains that in typical speech, 'you start with an idea of what you want to say. That idea gets translated into a plan for how to move your [vocal] articulators. That plan gets sent to the actual muscles, and then they carry it out.' But in many cases, people with impaired speech aren't able to complete that first step. 'This technology only works in cases where the "idea to plan" part is functional but the "plan to movement" part is broken,' a collection of conditions called dysarthria, Huth says.

According to Kunz, the four research participants are eager about the new technology. 'Largely, [there was] a lot of excitement about potentially being able to communicate fast again,' she says, adding that one participant was particularly thrilled by his newfound potential to interrupt a conversation, something he couldn't do with the slower pace of an attempted speech device.

To ensure private thoughts remained private, the researchers implemented a code phrase: 'chitty chitty bang bang.' When internally spoken by participants, it would prompt the BCI to start or stop transcribing.

Brain-reading implants inevitably raise concerns about mental privacy. For now, Huth isn't concerned about the technology being misused or developed recklessly, speaking to the integrity of the research groups involved in neural prosthetics. 'I think they're doing great work; they're led by doctors; they're very patient-focused. A lot of what they do is really trying to solve problems for the patients,' he says, 'even when those problems aren't necessarily things that we might think of,' such as being able to interrupt a conversation or 'making a voice that sounds more like them.'

For Kunz, this research is particularly close to home. 'My father actually had ALS and lost the ability to speak,' she says, adding that this is why she got into her field of research. 'I kind of became his own personal speech translator toward the end of his life, since I was kind of the only one that could understand him. That's why I personally know the importance and the impact this sort of research can have.'

The contribution and willingness of the research participants are crucial in studies like this, Kunz notes. 'The participants that we have are truly incredible individuals who volunteered to be in the study not necessarily to get a benefit to themselves but to help develop this technology for people with paralysis down the line. And I think that they deserve all the credit in the world for that.'
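The two-stage pipeline the article describes (brain signals to sounds, then sounds to words from a large dictionary) can be caricatured in a few lines of Python. In the sketch below, the neural-network stage is replaced by made-up per-frame phoneme probabilities, and the 125,000-word dictionary is shrunk to four entries; real systems use recurrent networks, language models, and proper sequence alignment (e.g. CTC) rather than one frame per phoneme, so treat this purely as an illustration of the idea.

```python
# Toy two-stage decoder: (1) a stub stands in for the model that maps
# motor-cortex activity to phoneme probabilities per time frame, and
# (2) a tiny vocabulary is scored against those probabilities to pick
# the most likely word.
import math

# Stage 1 stub: pretend per-frame phoneme probabilities from the model.
frame_probs = [
    {"k": 0.7, "d": 0.2, "t": 0.1},
    {"ay": 0.8, "eh": 0.2},
    {"t": 0.6, "k": 0.4},
]

# Stage 2: a four-entry stand-in for the 125,000-word dictionary.
VOCAB = {"kite": ["k", "ay", "t"], "day": ["d", "ay"],
         "date": ["d", "ay", "t"], "kit": ["k", "ih", "t"]}

def word_log_likelihood(phonemes, frames):
    """Score a word by aligning its phonemes one-to-one with frames."""
    if len(phonemes) > len(frames):
        return -math.inf
    return sum(math.log(frames[i].get(p, 1e-6)) for i, p in enumerate(phonemes))

best = max(VOCAB, key=lambda w: word_log_likelihood(VOCAB[w], frame_probs))
print(best)  # -> 'kite'
```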


Gizmodo
10 hours ago
New Brain Interface Interprets Inner Monologues With Startling Accuracy
Scientists can now decipher brain activity related to the silent inner monologue in people's heads with up to 74% accuracy, according to a new study. In new research published today in Cell, scientists from Stanford University decoded imagined words from four participants with severe paralysis due to ALS or brainstem stroke. Aside from being absolutely wild, the findings could help people who are unable to speak communicate more easily using brain-computer interfaces (BCIs), the researchers say. 'This is the first time we've managed to understand what brain activity looks like when you just think about speaking,' lead author Erin Kunz, a graduate student in electrical engineering at Stanford University, said in a statement. 'For people with severe speech and motor impairments, BCIs capable of decoding inner speech could help them communicate much more easily and more naturally.' Previously, scientists have managed to decode attempted speech using BCIs. When people physically attempt to speak out loud by engaging the muscles related to speech, these technologies can interpret the resulting brain activity and type out what they're trying to say. But while effective, the current methods of BCI-assisted communication can still be exhausting for people with limited muscle control. The new study is the first to directly take on inner speech. To do so, the researchers recorded activity in the motor cortex—the region responsible for controlling voluntary movements, including speech—using microelectrodes implanted in the motor cortex of the four participants. The researchers found that attempted and imagined speech activate similar, though not identical, patterns of brain activity. They trained an AI model to interpret these imagined speech signals, decoding sentences from a vocabulary of up to 125,000 words with as much as 74% accuracy. In some cases, the system even picked up unprompted inner thoughts, like numbers participants silently counted during a task. For people who want to use the new technology but don't always want their inner thoughts on full blast, the team added a password-controlled mechanism that prevented the BCI from decoding inner speech unless the participants thought of a password ('chitty chitty bang bang' in this case). The system recognized the password with more than 98% accuracy. While 74% accuracy is high, the current technology still makes a substantial amount of errors. But the researchers are hopeful that soon, more sensitive recording devices and better algorithms could boost their performance even more. 'The future of BCIs is bright,' Frank Willett, assistant professor in the department of neurosurgery at Stanford and the study's lead author, said in a statement. 'This work gives real hope that speech BCIs can one day restore communication that is as fluent, natural, and comfortable as conversational speech.'