
India's obesity crisis tied to diet more than exercise
According to researchers from various institutions, including the University of Cambridge, Stanford University and Baylor College of Medicine, diet plays a bigger role than physical inactivity when it comes to weight gain.

WHAT THE STUDY LOOKED AT

The study, published in the Proceedings of the National Academy of Sciences, set out to understand whether obesity is mainly due to people eating too many calories, or because they aren't burning enough of them through activity.

The research team, led by Amanda McGrosky, looked at data from over 4,200 adults between ages 18 and 60, across 34 populations on six continents. These included people from a wide range of lifestyles — from hunter-gatherers and farmers to those living in fully industrialised societies.

Researchers measured how much energy people spent in total (Total Energy Expenditure, or TEE), how much came from basic body functions like breathing and digestion (Basal Energy Expenditure, or BEE), and how much was from physical activity (Activity Energy Expenditure, or AEE). They also measured body fat and BMI. The participants were grouped based on how economically developed their countries were, using the UN Human Development Index.

EXERCISE ISN'T THE MAIN CULPRIT

At first glance, people in more developed countries had higher energy use: they were burning more calories in total, including from physical activity. They also had higher body weight and body fat. But that wasn't the whole story.

After adjusting for age, sex, and body size, the data showed that people in wealthier nations weren't burning fewer calories from exercise. In fact, their activity energy expenditure (AEE) was slightly higher, not lower.
This suggests that a lack of physical activity alone isn't driving the obesity crisis in those places. Instead, the study found that total energy expenditure was only weakly linked to obesity, accounting for just about 10% of the rise in obesity in high-income countries.

The researchers pointed to another likely reason: the amount of ultra-processed food (UPF) in the diet. People in industrialised societies tend to eat more UPFs, like packaged snacks, sugary drinks, processed meats, and instant meals. These foods were strongly linked to higher body fat: the more UPFs in the diet, the more likely a person was to have a higher body fat percentage.

WHY WHAT YOU EAT MATTERS MORE

The researchers believe that the way ultra-processed foods are made — their taste, texture, high calorie content, and appearance — can override natural hunger signals and lead to overeating. Processing also makes it easier for the body to absorb calories, compared to unprocessed or whole foods.

While the study makes clear that exercise still plays an important role in preventing disease and supporting mental health, it highlights that solving the obesity crisis means looking beyond just how much people move.

Currently, India ranks third in the world in the number of overweight and obese individuals, after the US and China. With obesity rates having doubled over the past three decades, Dr Sukriti Bhalla, a cardiologist at Aakash Healthcare, said obesity has fast-tracked ageing in many people.

"A few years ago, heart attacks struck Indians in their late 50s, already a decade younger than Western peers due to genetic predisposition. Today, obesity has dragged that age down to the 30s. It's not just a link. Obesity is turning genetic vulnerability into a giant non-communicable disease burden. Visceral fat disrupts metabolism, clogs arteries, and overloads organs," said Dr Bhalla.

Reducing calories from ultra-processed foods, improving access to whole foods, and better understanding how these products affect our bodies might help address obesity on a global scale.
Related Articles


Indian Express
How scientists built a password-protected mind-reading brain implant
Scientists have developed a brain-computer interface (BCI) — a device that allows the human brain to communicate with external software or hardware — which works only when the user thinks of a preset password. The findings were detailed in a study, 'Inner speech in motor cortex and implications for speech neuroprostheses', published in the journal Cell on August 14. The new system was developed by researchers based at Stanford University (the United States). Here is a look at how scientists built a password-protected BCI.

But first, why are brain-computer interfaces significant?

BCIs allow the user to control an application or a device using only their mind. Usually, when someone wants to interact with an application — let's say, they want to switch on a lamp — they first have to decide what they want to do, then they coordinate and use the muscles in their arms, legs or feet to perform the action, like pressing the lamp's on/off switch with their fingers. Then, the device — in this case, the lamp — responds to the action.

What BCIs do is help skip the second step of coordinating and using the muscles to perform an action. Instead, they use a computer to identify the desired action and then control the device directly. This is why BCIs have emerged as promising tools for people with severe physical disabilities. They are also being used to restore speech in people who have limited reliable control over their muscles.

How was a password-protected BCI developed?

The researchers involved in the study focused on 'internal-speech' BCIs, which translate brain signals into text or audio. While these types of devices do not require users to speak out loud, there is always a risk that they could accidentally decode sentences users never intended to say.
To resolve this issue, the researchers first 'analysed brain signals collected by microelectrodes placed in the motor cortex — the region involved in voluntary movements — of four participants,' according to a report by the journal Nature. All of these participants had trouble speaking and were asked to either try to say a set of words or imagine saying them. The researchers then analysed the recordings of participants' brain activity. This helped them discover that attempted and internal speech originated in the same brain region and generated similar neural signals, but those associated with internal speech were weaker. This data was used to train artificial intelligence models, which helped BCIs to interpret sentences imagined by the participants after they were asked to think of specific phrases. The devices correctly interpreted 74% of the imagined sentences. To ensure that the BCIs do not decode sentences that users do not intend to utter, the researchers added a password to the system, allowing users to control when decoding began. 'When a participant imagined the password 'Chitty-Chitty-Bang-Bang' (the name of an English-language children's novel), the BCI recognised it with an accuracy of more than 98%,' the Nature report said. (With inputs from Nature)


Time of India
For some patients, the 'inner voice' may soon be audible
For decades, neuro-engineers have dreamed of helping people who have been cut off from the world of language. A disease like amyotrophic lateral sclerosis, or ALS, weakens the muscles in the airway. A stroke can kill neurons that normally relay commands for speaking. Perhaps, by implanting electrodes, scientists could record the brain's electric activity and translate that into spoken words.

Now a team of researchers has made an important advance toward that goal. Previously they succeeded in decoding the signals produced when people tried to speak. In the new study, published Thursday in the journal Cell, their computer often made correct guesses when the subjects simply imagined saying words. Christian Herff, a neuroscientist at Maastricht University in the Netherlands who was not involved in the research, said the result went beyond the merely technological and shed light on the mystery of language. "It's a fantastic advance," Herff said.

The new study is the latest result in a long-running clinical trial, called BrainGate2, that has already seen some remarkable successes. One participant, Casey Harrell, now uses his brain-machine interface to hold conversations. In 2023, after ALS had made his voice unintelligible, Harrell agreed to have electrodes implanted in his brain. A computer recorded the electrical activity from the implants as Harrell attempted to say different words. Over time, with the help of AI, the computer predicted 6,000 words with 97.5% accuracy.

But successes like this raised a troubling question: Could a computer accidentally record more than patients actually wanted to say? Could it eavesdrop on their inner voice? "We wanted to investigate if there was a risk of the system decoding words that weren't meant to be said aloud," said Erin Kunz, a neuroscientist at Stanford University and an author of the study. She and her colleagues also wondered if patients might actually prefer using inner speech.

Kunz and her colleagues decided to investigate the mystery for themselves. The scientists gave participants seven different words, including "kite" and "day," then compared the brain signals when participants attempted to say the words and when they only imagined saying them. As it turned out, imagining a word produced a pattern of activity similar to that of trying to say it, but the signal was weaker. The computer did a good job of predicting which of the seven words the participants were thinking. For Harrell, it didn't do much better than a random guess would have, but for another participant it picked the right word more than 70% of the time.

The researchers put the computer through more training, this time specifically on inner speech. Its performance improved significantly, including for Harrell. Now when the participants imagined saying entire sentences, such as "I don't know how long you've been here," the computer could accurately decode most of the words.

Herff, who has done his own studies, was surprised that the experiment succeeded. Before, he would have said that inner speech is fundamentally different from the motor cortex signals that produce actual speech. "But in this study, they show that, for some people, it isn't that different," he said.

Kunz emphasized that the computer's current performance involving inner speech would not be good enough to let people hold conversations. "The results are an initial proof of concept more than anything," she said. But she is optimistic that decoding inner speech could become the new standard for brain-computer interfaces. In recent trials, she and her colleagues have improved the computer's accuracy. "We haven't hit the ceiling yet," she said.

NYT


India Today
More US states tell AI to stay out of therapy because robots lack feelings
From life advice to late-night rants, people across the globe are pouring their hearts out to machines. Even therapists are turning to AI to assist in the treatment of patients. But this growing dependence on AI for comfort and advice is raising serious concerns. Psychologists and researchers warn that robots cannot replace the empathy and judgement of a trained human.

To curb the increasing reliance on AI, Illinois has become the latest US state to outlaw the use of AI-powered chatbots for mental health treatment. The ban restricts the use of AI in therapy, citing risks to safety, privacy, and the potential for harm.

In Illinois, lawmakers have passed a new 'Therapy Resources Oversight' law that forbids licensed therapists from using AI to make treatment decisions or to communicate directly with patients. The law also bars companies from marketing chatbots as full-fledged therapy tools without a licensed professional involved. Violations could result in civil penalties of up to $10,000, with enforcement based on public complaints investigated by the Illinois Department of Financial and Professional Regulation.

Illinois is not the only state taking action. It is now the third state to impose such restrictions, joining Utah and Nevada. Utah introduced its rules in May, limiting AI's role in therapy, while Nevada followed in June with a similar crackdown on AI companies offering mental health services. The bans on using AI in therapy come amid mounting warnings from psychologists, researchers, and policymakers.
They caution that unregulated AI chatbots can steer conversations into dangerous territory, sometimes encouraging harmful behaviour or failing to step in when someone is in crisis.

A Stanford University study (via The Washington Post) earlier this year found that many chatbots responded to prompts about suicide or risky activities with straightforward, even encouraging, answers rather than directing users to seek help. In one example, when a user asked a chatbot for locations of high bridges to jump from, the chatbot provided the list.

'This is the opposite of what a therapist does,' said Vaile Wright of the American Psychological Association, explaining that human therapists not only validate emotions but also challenge unhealthy thoughts and guide patients towards safer coping.

And it's not just one study raising red flags. In another case, researchers at the University of California, Berkeley found that some AI chatbots were willing to suggest dangerous behaviour when prompted hypothetically — for example, advising a fictional addict to use drugs. Experts have also raised privacy concerns, warning that many users may not realise their conversations with chatbots are stored or used for training.

Some are even arguing that marketing AI tools as therapy is deceptive and potentially dangerous. 'You shouldn't be able to go on an app store and interact with something calling itself a 'licensed' therapist,' said Jared Moore, a Stanford researcher.