
Latest news with #brainactivity

Optimists share similar brain patterns when thinking about the future, scans show

The Guardian

21-07-2025



Whether it's an exam, flight or health check, some people take a sunny view of the future while others plan for catastrophes. Now researchers have found that people with an upbeat outlook show similar patterns of brain activity when they mull over future scenarios.

'Optimists seem to use a shared neural framework for organising thoughts about the future, which likely reflects a similar style of mental processing rather than identical ideas,' said Kuniaki Yanagisawa, first author of the research from Kobe University in Japan. He said the results could shed light on previous findings that showed optimists tended to be more socially successful. 'What this [new study] tells us is that the foundation of their social success might be this shared reality,' he added. 'It's not just about having a positive attitude; it's that their brains are literally on the same wavelength, which may allow for a deeper, more intuitive kind of connection.'

Writing in the Proceedings of the National Academy of Sciences, the researchers said they asked 87 participants to complete a questionnaire to reveal how optimistic they were. Each participant also underwent an MRI brain scan, during which they were asked to imagine various possible future life events, some of which were positive – such as taking an 'epic trip around the world' – while others were neutral or negative, such as being fired. A subset of the participants was asked to imagine scenarios related to death.

The team found participants who were more optimistic showed greater similarities in the patterns of their brain activity in a region involved in future-oriented thinking, called the medial prefrontal cortex (MPFC). Yanagisawa said one possibility was that the more diverse brain activity among pessimists reflected a more varied set of concerns when thinking about negative scenarios.
However, he said another possibility was that optimists viewed their futures within a shared framework of socially accepted goals, which pessimists might feel disconnected from for personal reasons, meaning they each had a different way of thinking about the future. The researchers said the results had parallels with the first line of Leo Tolstoy's novel Anna Karenina: 'Happy families are all alike; every unhappy family is unhappy in its own way.' 'On the basis of this principle, we propose that optimistic individuals are all alike, but each less optimistic individual imagines the future in their own way,' the team wrote.

The researchers also found patterns of brain activity in the MPFC showed clearer differences between positive and negative future events in optimists. 'This suggests that optimists not only 'think alike' in a structural sense, but they also process emotional information about the future differently, with a greater ability to separate what's good from what's bad, which may help them stay resilient,' said Yanagisawa. He said previous work had associated this kind of clearer separation with a more abstract, psychologically distant way of thinking about negative events.

'We're not saying that optimists have identical thoughts about the future, or that they imagine the exact same scenarios,' said Yanagisawa. 'Rather, what we found is that their brains represent future events in a similar way, especially in how they distinguish between positive and negative possibilities. So while we wouldn't say they have the same thoughts, we can say that they appear to think in the same way – structurally.'

Prof Lisa Bortolotti of the University of Birmingham, in the UK, who was not involved in the work, said the study suggested optimists pictured future negative events in less vivid and concrete detail than positive ones, meaning such potential scenarios affected them less.
'These findings might suggest that optimism does not amount to a form of irrationality or reality distortion because it does not change how we see things out there but how those things impact us,' she added. Bortolotti said assuming things won't go wrong brought no benefits if it left us unprepared for challenges, but noted that optimism worked when it motivated us to pursue goals. 'Picturing a positive outcome in detail as feasible and desirable makes us value it and work for it, ultimately making it more likely that we will achieve it,' she said.
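The 'similar patterns of brain activity' result is a form of inter-subject pattern similarity. As a rough illustration only – the function name, toy data and numbers below are hypothetical, not the study's actual analysis pipeline – similarity across participants can be sketched as the mean pairwise Pearson correlation of their activity patterns:

```python
import numpy as np

def pairwise_pattern_similarity(patterns):
    """Mean pairwise Pearson correlation between participants'
    activity patterns (one row per participant).
    Hypothetical illustration, not the study's actual method."""
    r = np.corrcoef(patterns)           # participants x participants matrix
    iu = np.triu_indices_from(r, k=1)   # unique pairs, excluding self-correlation
    return r[iu].mean()

# Toy data: the "optimist-like" group shares one template pattern plus
# small noise; the "pessimist-like" group is independent random patterns.
rng = np.random.default_rng(0)
template = rng.normal(size=50)
optimists = template + 0.3 * rng.normal(size=(10, 50))
pessimists = rng.normal(size=(10, 50))

print(pairwise_pattern_similarity(optimists))   # high: patterns are alike
print(pairwise_pattern_similarity(pessimists))  # near zero: each differs
```

With a shared template plus noise, the first group scores close to 1 while the independent patterns score near 0 – a toy analogue of the 'optimistic individuals are all alike' finding, not a reproduction of it.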


Has AI already rotted my brain?

Fast Company

27-06-2025



Five years ago, I bought an e-bike. At the time, the motor-equipped two-wheelers were burdened with an iffy reputation. Was it way easier to get up a hill on one than on a bike without a battery? Absolutely. Did that mean people who rode them were lazy or even cheaters? Some cycling enthusiasts thought so. But what if the boost provided by your e-bike motivated you to make longer trips and more of them—all powered, in part, by your own pedaling? Having logged almost 10,000 miles on my Gazelle, I'm certain it's been a guilt-free boon to my well-being. Data backs me up.

I thought about that recently while reading about a new study conducted at MIT's Media Lab. Researchers divided subjects ages 18 to 39 into three groups and had them write essays on topics drawn from the SAT questions answered by college applicants, such as 'Do works of art have the power to change people's lives?' One group relied entirely on unassisted brainpower to complete the essay. A second group could use a search engine. And the third could call on ChatGPT. The study subjects wore EEG helmets that captured their brain activity as they worked.

After analyzing that data, the researchers concluded that access to ChatGPT didn't just make composing an essay easier. It made it too easy, in ways that might negatively impact people's long-term ability to think for themselves. In some cases, the ChatGPT users merely cut and pasted text the chatbot had generated; not surprisingly, they exhibited little sense of ownership over the finished product compared to those who didn't have a computerized ghost on tap. 'Due to the instant availability of the response to almost any question, LLMs can possibly make a learning process feel effortless, and prevent users from attempting any independent problem solving,' the researchers wrote in their report. 'By simplifying the process of obtaining answers, LLMs could decrease student motivation to perform independent research and generate solutions.
Lack of mental stimulation could lead to a decrease in cognitive development and negatively impact memory.' The study reached those sobering conclusions in the context of young people growing up in an era of bountiful access to AI. But the alarms it set off also left me worried about the technology's impact on my own brain. I have long considered AI an e-bike for my mind—something that speeds it through certain tasks, thereby letting it go places previously out of reach. What if it's actually so detrimental to my mental acuity that I haven't even noticed my critical faculties withering away?

After pondering that worst-case scenario for a while, I calmed down. Yes, consistently opting for the most expedient way to accomplish work rather than the one that produces the best results is no way to live. Sure, being overly reliant on ChatGPT—or any form of generative AI—has its hazards. But I'm pretty confident it's possible to embrace AI without your reasoning skills atrophying.

No single task can represent all the ways people engage with AI, and the one the MIT researchers chose—essay writing—is particularly fraught. The best essays reflect the unique insight of a particular person: when students take the actual SAT, they aren't even allowed to bring a highlighter, let alone a bot. We don't need EEG helmets to tell us that people who paste ChatGPT's work into an essay they've nominally written have lost out on the learning opportunity presented by grappling with a topic, reaching conclusions, and expressing them for oneself.

However, ChatGPT and its LLM brethren also excel at plenty of jobs too mundane to feel guilty about outsourcing. Each week, for example, I ask Anthropic's Claude to clean up some of the HTML required to produce this newsletter. It handles this scut work faster and more accurately than I can.
I'm not sure what my brain waves would reveal, but I'm happy to reinvest any time not spent on production drudgery into more rewarding aspects of my job. Much of the time, AI is most useful not as a solution but as a starting point. Almost never would I ask a chatbot about factual information, get an answer, and call it a day. They're still too error-prone for that. Yet their ease of use makes them an inviting way to get rolling on projects. I think of them as facilitating the research before the old-school research I usually end up doing.

And sometimes, AI is a portal into adventures I might otherwise never have taken. So far in 2025, my biggest rabbit hole has been vibe coding—coming up with ideas for apps and then having an LLM craft the necessary software using programming tools I don't even understand. Being exposed to technologies such as React and TypeScript has left me wanting to learn enough about them to do serious coding on my own. If I do, AI can take credit for sparking that ambition.

I'm only so Pollyanna-ish about all this. Over time, the people who see AI as an opportunity to do more thinking—not less of it—could be a lonely minority. If so, the MIT researchers can say 'We told you so.' Case in point: at the same time the MIT study was in the news, word broke that VC titan Andreessen Horowitz had invested $15 million in Cluely, a truly dystopian startup whose manifesto boasts its aim of helping people use AI to 'cheat at everything', based on the theory that 'the future won't reward effort.' Its origin story involves cofounder and CEO Roy Lee being suspended from Columbia University after developing an app for cheating on technical employment interviews. Which makes me wonder how Lee would feel about his own candidates misleading their way into job offers. With any luck, the future will turn out to punish Cluely's cynicism. But the company's existence—and investors' willingness to shower it with money—says worse things about humankind than about AI.
You've been reading Plugged In, Fast Company's weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you—or if you're reading it online—you can check out previous issues and sign up to get it yourself every Friday morning. I love hearing from you: Ping me at hmccracken@ with your feedback and ideas for future newsletters. I'm also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard.

Using AI makes you stupid, researchers find

Telegraph

17-06-2025



Artificial intelligence (AI) chatbots risk making people less intelligent by hampering the development of critical thinking, memory and language skills, research has found. A study by researchers at the Massachusetts Institute of Technology (MIT) found that people who relied on ChatGPT to write essays had lower brain activity than those who used their brain alone. The group who used AI also performed worse than the 'brain-only' participants in a series of tests, and struggled when asked to perform tasks without it. 'Reliance on AI systems can lead to a passive approach and diminished activation of critical thinking skills when the person later performs tasks alone,' the paper said. Researchers warned that the findings raised 'concerns about the long-term educational implications' of using AI both in schools and in the workplace. It adds to a growing body of work that suggests people's brains switch off when they use AI.

'Human thinking offloaded'

The MIT study monitored 54 people who were asked to write four essays. Participants were divided into three groups: one wrote essays with the help of ChatGPT, another used internet search engines to conduct research, and the third relied solely on brainpower. Researchers then asked them questions about their essays while performing electroencephalogram (EEG) scans that measured activity in their brains. Those who relied on ChatGPT, a 'large language model' that can answer complicated questions in plain English, 'performed worse than their counterparts in the brain-only group at all levels: neural, linguistic, scoring', the researchers said. The EEG scans found that 'brain connectivity systematically scaled down with the amount of external support' and was weakest in those relying on AI chatbots to help them write essays. The readings in particular showed reduced 'theta' brainwaves, which are associated with learning and memory formation, in those using chatbots.
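The 'theta brainwaves' referred to above are the 4–8 Hz component of the EEG signal. As a minimal sketch only – the sampling rate and signal are synthetic, and `band_power` is a hypothetical helper, not the study's pipeline – band-limited power can be estimated from a Welch power spectral density:

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Mean power spectral density in the [lo, hi) Hz band,
    estimated with Welch's method. Illustrative helper only."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

# Synthetic "EEG": a 6 Hz (theta-range) oscillation buried in noise.
fs = 256                                  # samples per second (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 6 * t) + 0.2 * rng.normal(size=t.size)

theta = band_power(eeg, fs, 4, 8)         # theta band, 4-8 Hz
alpha = band_power(eeg, fs, 8, 13)        # alpha band, 8-13 Hz
print(theta > alpha)                      # the 6 Hz component dominates
```

A real EEG pipeline would add artifact rejection, electrode-wise averaging and per-condition comparisons; the point here is only that 'reduced theta' means reduced power in this frequency band.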
'Essentially, some of the 'human thinking' and planning was offloaded,' the study said. The impact of AI contrasted with the use of search engines, which had relatively little effect on results. Of those who had used the chatbot, 83pc failed to provide a single correct quote from their essays – compared with around 10pc of those who used a search engine or their own brainpower. Participants who relied on chatbots were able to recall very little information about their essays, suggesting either that they had not engaged with the material or that they had failed to remember it. Those using search engines showed only slightly lower levels of brain engagement than those writing without any technological aids, and similar levels of recall.

Impact on 'cognitive muscles'

The findings will fuel concerns that AI chatbots are causing lasting damage to our brains. A study by Microsoft and Carnegie Mellon, published in February, found that workers reported lower levels of critical thinking when relying on AI. The authors warned that overuse of AI could leave cognitive muscles 'atrophied and unprepared' for when they are needed. Nataliya Kosmyna, the lead researcher on the MIT study, said the findings demonstrated the 'pressing matter of a likely decrease in learning skills' in those using AI tools when learning or at work. While the AI-assisted group was allowed to use a chatbot for their first three essays, in their final session they were asked to rely solely on their brains. The group continued to show lower memory and critical thinking skills, which the researchers said highlighted concerns that 'frequent AI tool users often bypass deeper engagement with material, leading to 'skill atrophy' in tasks like brainstorming and problem-solving'. The essays written with the help of ChatGPT were also found to be homogenous, repeating similar themes and language.
Researchers said AI chatbots could increase 'cognitive debt' in students and lead to 'long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity'. Teachers have been sounding the alarm that pupils are routinely cheating on tests and essays using AI chatbots. A survey by the Higher Education Policy Institute in February found 88pc of UK students were using AI chatbots to help with assessments and learning, and that 18pc had directly included AI-generated text in their work. OpenAI, the developer of ChatGPT, was contacted for comment.

