Extreme amnesia cases, AI, and our imagined futures: in conversation with a Harvard memory researcher


Yahoo · June 17, 2025
We tend to think of human memory as if it's one of those old steel filing cabinets: some information gets stashed inside, and when the time comes, we hope we can find it by flipping through the tabs on a few billion neuron-supported manila folders. But the truth, as science has learned—and continues to learn—is that memory is more than just the attic of our minds. We're realizing that it's a foundational part of how we interpret and imagine our futures. And few have done more work to unlock this powerful reframing of human memory than cognitive psychologist Daniel Schacter.
What he and his team at the Schacter Memory Lab at Harvard continue to tease out in their research is a picture (almost literally, thanks to fMRI) of how memory works in our brains—and that picture looks remarkably like how we imagine.
The systems largely overlap, which implies that memories—fallible, mutable, and spread across almost every region of the brain—are being accessed, consciously and unconsciously, while we do everything from engaging in creative pursuits to solving problems. He calls it 'constructive episodic simulation' (with 'episodic' memories being the personal moments we recall, vs. 'semantic' memories, which are more facts and meanings). The upshot: our memories aren't sepia-toned artifacts, but modular building blocks. And our brains use them like a 5-year-old plays with Lego—no instructions, and plenty of experimentation.
And just to complicate matters further, Schacter's research has implied that the inherent messiness of memory—which he catalogued in his 2001 book The Seven Sins of Memory—may actually be a feature, not a bug. (Even if it doesn't feel that way when, say, you blank on the name of a person you've known for years.)
Intrigued by all of the above, I spoke to Schacter recently to better understand this expansive notion of memory and imagination, and how the two function on a 'common brain network.' He has the professorial mien of a man who has taught psychology at Harvard for 34 years: specific and circumspect, with an owl's suspicious glare but a saint's patience when faced with a journalist trying to tease out the mysteries of memory that Schacter has devoted nearly a half-century to exploring.
In the process, Schacter touched on what A.I. and memory might actually have in common, the effects of being force-fed memories by our phones, and the acutely amnesic patient who helped inspire his research.
This interview has been edited and condensed for clarity.
What's your first memory of doing memory-focused research?
One of the first patients I tested. It was the first one. He seemed like a fairly bright guy—had a normal conversation, like we're having now. He didn't do very well in recalling, like, a word list, for example. That wasn't surprising. But what was surprising was that when I had to go out of the room for some reason and came back, like, two minutes later, he had no idea who I was. He had no idea what we'd been doing. And that is really what caught my attention. When you see a patient with a memory disorder that severe—who in other respects seems normal, carrying on a pleasant conversation—but is that impaired? Yeah.
What was your first eureka moment in doing this research?
That involved various observations related to what I later called 'implicit memory,' which was something that had been observed clinically in amnesic patients—they would show some effect of a prior experience without really having any recollection of [it]. That hadn't been named and really crystallized, but there were indications in the literature from the '70s that even though these amnesic patients couldn't tell you what happened 10 minutes ago, they nonetheless could be impacted by it.
One of my first direct experiences with that was a patient who I tested, as a graduate student, who exhibited this related phenomenon that we call 'source amnesia.' So this is where I would tell the patient some obscure fact or a made-up fact; 'Bob Hope's father was a fireman' is one from some of the experiments we did. Time would pass and I'd say, 'Do you know what job Bob Hope's father had?' And the patient would say, 'Oh, I think he was a fireman?'
[Here, Schacter pantomimes a conversation:]
'How do you know?'
'Oh, I heard it on the radio or a friend told me that a couple years ago.'
'Did I ever mention that?'
'No, no, you never mentioned that.'
'Well, actually, I just said it two minutes ago.'
So the first time you see that with your own eyes, that's pretty impactful.
What, for you, in these past 50 years has been the biggest change in how science understands memory?
During the time period that I've been working in our lab, we've been focusing less on memory just as a repository of information about past experiences or a retrieval of stored information, and looking at it more for the role it plays in thinking ahead to the future—simulating possible future experiences. For its predictive aspects. You're using information to think ahead: how you want to go about solving a problem or planning your day. Many of the same brain regions that support your ability to go back and remember past experiences [are the] very same brain regions involved in you imagining your future.
I get the feeling that memory is now understood to not be just a single discrete boxed-in function within our brains, and that in some ways it's maybe the foundational operating system of how our brains consciously and subconsciously work.
Maybe that's a little bit expansive, but what you've been studying lately around memory's link to creativity and imagination seems to point in that direction.
I think it does point in that direction. We're using the term 'memory' here, but we have to keep in mind there's not just one memory. Certainly there was a distinction out there between short-term and long-term memory. As we got into the 1980s we focused on the implicit-versus-explicit memory distinction, where under 'explicit' you could group 'episodic' and 'semantic'—and under 'implicit,' a whole bunch of different things: priming, procedural learning, some kinds of conditioning…
That goes back to what we were talking about before: that amnesic patients, for example, can show intact implicit memory without any corresponding explicit memory. So when we talk about memory, we always have to remember: it's not just one thing. That's something that I think is better understood as part of the way we think about memory since I've been involved in the field.
Once you start to dig into the concept of memory, does it start to feel slightly philosophical for you?
Well, I mean, there are always elements of philosophical perspective when we're talking about these kinds of distinctions, but we do try to root it in empirical observations. I would say probably for me, the first half of my career was more focused on different kinds of memory—trying to link up different kinds of explicit and implicit memory with different brain regions. And the second part has been more focused on looking specifically at how we use our episodic memories in a flexible way—to take bits and pieces of past experience and recombine them and construct simulations, and all of that stuff.
Take me down that path a little bit—the notion that memory underpins our imagination in a powerful way.
When I was in Toronto, at the unit for memory disorders, my interests were mainly focused on implicit versus explicit memory and source amnesia. There was one patient who we were very interested in back then. He was known in the literature by the initials K.C.—short for Kent Cochrane. Kent was a young man who, in the early 1980s, suffered a head injury in a motorcycle accident, and he happened to have brain damage that produced one of these very severe amnesic syndromes. It's fair to say that he could not remember a single specific episode from any time in his life.
There was a testing session [in] 1983 or 1984 where [psychologist and neuroscientist] Endel Tulving and I were there, with K.C. seated on the other side of the table from us. Tulving asked him this seemingly innocent question: Tell me what you think you're going to be doing tomorrow.
Now, we know that when you ask K.C., 'Tell me what you did yesterday,' he'll say: 'I can't remember any one thing I did. Maybe I had breakfast, then I had lunch.' A script-like response. And the same thing happened when Tulving asked him what he was going to do tomorrow: K.C. just said, 'Well, I don't know.' If you pushed hard enough, he would eventually say, 'Well, maybe I'll have breakfast and lunch.' But he couldn't conjure up any one specific episode of something he might do in the future, just like he couldn't remember what he had done in the past. That was very striking, and it suggested a role for episodic memory in imagining the future.
The question was, how do you study that? We as memory researchers knew how to study memory for past events—but how do you study imagination of future events? So I put it on the back burner. Nobody in the field was particularly interested in the issue of memory and future thinking [at that time].
Then, in the early 2000s, some people started publishing papers on the similarities between remembering the past and imagining the future. In 2005, I had a new postdoc in the lab by the name of Donna Rose Addis, and she was doing functional MRI studies of autobiographical memory. It'd been in the back of my mind for 20 years to look at the relationship between remembering the past and imagining the future, so the two of us talked and thought, hey, what if we do a standard autobiographical memory experiment but throw in a future-imagining condition? You give a keyword, and in some trials the subject—while being scanned—is asked to remember a past experience, in other trials is asked to imagine a future experience, and in still other trials is given control tasks that don't involve remembering or imagining.
And we found this really striking result of similar brain regions showing increased activity when people remember the past and imagine the future. We published our paper in 2007, and that was the year the field got interested in this question as a result of our study. That really then set the agenda for the 18 years since then, and we've been continuing to look at this issue in a variety of ways.
As you saw those fMRI scans come in and saw that overlap, was that like a real tap-dance moment for you?
I mean, you're not analyzing it subject-by-subject, but…yeah, that was very striking.
One of the theoretical ideas that has guided us, which we put forward back in that 2007 paper, was that maybe one of the reasons we experience certain kinds of memory errors is that they're a byproduct of a memory system that's set up to allow us to use our past in very flexible ways—to recombine bits and pieces of experiences so that we can simulate or imagine novel experiences in the future. The future is rarely identical to the past, so we want to think about how we're going to deal with new upcoming situations that we haven't experienced before.
A byproduct of an adaptive system that generally works well and allows us to flexibly use our past experience to think about the future may be that we're prone to certain kinds of memory errors, when elements of different experiences get miscombined.
What do you feel like the average layperson gets wrong about their concept of memory?
I think the general idea that memory is more or less like a data recorder or a photograph that fades. We all know we don't remember exactly what happened in every detail—that [our brain] more or less records what happens, but it fades over time. That's what I call transience in [my book] The Seven Sins of Memory. But I think people are less aware of some of the other influences that can change or distort memory. Related to the question you raised about the self, one of the seven sins I refer to is 'bias.' It takes various forms, but one is, for example, the tendency in many situations to remember the past as better than it actually was, or in ways that bolster our self-image. I think we're not aware of a lot of the top-down influences on memory that exist.
One of the very recent studies from our lab was a collaboration with another lab here at Harvard, led by Jill Hooley, a clinical psychologist. A couple of our graduate students got together, and we looked at the impact of grandiose narcissism on remembering the past and imagining the future. The upshot of this study was that people who score really high on grandiose narcissism, who think they're the greatest thing in the world, tend to remember the past, and also imagine the future, in a highly exaggerated positive way compared to people who score lower in narcissism.
It does sound like a blessing to feel so optimistic about the future.
Right, that's right.
And there's a lot of work along those lines that shows how our concepts of ourselves bias the way we remember past experiences. And then we and others have been showing that that same influence exists when we imagine the future. But I think that's one of the things that's probably hard for us to grasp intuitively: how our memories are changed or affected by our sense of [ourselves].
It's fascinating to think that memories can take on these different casts and tones based on our own feelings about ourselves, and create a cycle that feeds into itself psychologically.
Along these lines, one of the underappreciated aspects of memory concerns the potency of the act of retrieving a memory. That retrieval can do various things, but among them, change the memory. So retrieving a memory is not just a neutral event. It's not like bringing up a file on your computer and then putting it back with no changes. Depending on the circumstances, retrieving a memory—talking about it—can potentially introduce all kinds of interesting distortions into the memory.
Related to that, I've noticed how so much of our technology force-feeds us memories. My partner has that widget where a different photo shows up on her phone every time she turns it on. She'll say, 'Oh, remember this?' And it's from a vacation we took years ago. Do you think there's an effect on memory when we're being fed these moments, but outside their context? Is it deepening them? Is it warping them when a photo comes up and I'm thinking about the memory on a Tuesday during a Zoom meeting?
I think there are several things that can happen. One, there's potentially a strengthening effect for some aspect of the memory. You're reminded that this event took place and maybe the information that's in the photo that you're looking at becomes strengthened. It could potentially distort your memory because of what's not shown—you know, maybe there are other important things that took place in that event that aren't in the photo. So you start remembering it in a different way than you would have otherwise, had you just been thinking about it on your own.
And we know there's a really interesting phenomenon that's very well established, called retrieval-induced forgetting. And this is the idea that when you activate a memory, information related to that memory that you don't retrieve may become more difficult to retrieve later on.
Every time one of those photos comes up, it's reshaping that memory in a way I may or may not intend.
In interesting ways, yeah. Strengthening some aspects, weakening others.
I've got to ask about A.I., which seems to be everywhere right now. When it comes to large language models like ChatGPT, A.I. seems to mirror the concept of memory: it's this murky, probabilistic process drawing on really deep wells of information, though you have no clue what it is and isn't accessing. It hallucinates in ways that nobody really quite understands. It works off of both literal prompts and subtle, sometimes unintentional context cues, in the same way that memory seems to. Do you look at these A.I.s and think that they're close to the notion of memory as we understand it?
Well, they're operating in a slightly different way than we are, I think. They're certainly capable of making some interesting memory errors. And, you know, that may be one of the telltale signs of relying on A.I. The field of cognitive psychology has gotten interested in the question of: To what extent does A.I. mirror human cognition?
We have one published study where we were looking at how people come up with creative stories in response to just a few word cues. As in: Try to write a creative story based on these three unrelated words. And we did it with GPT-3 and GPT-4. Could you tell the difference, really, between the creativity of the stories that humans came up with and what the two large language models came up with? The answer was no; they were about equally creative.
And there's the interesting question of why they hallucinate. What are they using as a criterion to say, yeah, that reference is what I'm looking for? I don't think we understand the deep underpinnings of large language models well enough to compare them in detail to human cognition and human memory. But they do make some errors that look like the kinds of mistakes people make.
If a genie granted you one wish, what's the one question about how memory functions that you'd want answered?
Wow. That's a tough one.
Or maybe to phrase it a little better: What's the one answer that would unlock something for you about our understanding of memory? Something that, because of technology or the limits of science at the moment, you're stuck on.
You know, I think for me, it would be really having a deeper understanding of how we go about pulling together these different aspects of experience to turn them into, for example, future simulations.
As in, what route do we take?
Yeah. What exactly are the underlying neural pathways involved? How do we go from retrieving an experience in the traditional sense of memory, using information to go back into the past, to using that information to think about the future or to solve a problem? How do those modes differ at the level of the relevant neural processes? What is it that we do that allows us to shift between those different modes of retrieval? I think that's the one that really interests me.
This article is part of Your Memory, Rewired, a National Geographic exploration into the fuzzy, fascinating frontiers of memory science—including advice on how to make your own memory more powerful. Learn more.