Humans think — AI, not so much. Science explains why our brains aren't just fancy computers

Yahoo | April 25, 2025
The world, and countless generations of interaction with it, coaxed our brains to evolve into organs that perceive reality in a uniquely human way. And yet, thanks to the past century's developments in cognitive science and now artificial intelligence, we have entrenched a view of the brain that doesn't spend much time on this dynamic. Instead, most of us tend to see the brain as a "network" made of undifferentiated brain cells, in which neurons produce cognition through the patterns in which groups of them fire at once, a model that has inspired advanced computers and AI.
But accumulating discoveries of specialized brain cells challenge models of human or artificial intelligence in which thoughts and concepts arise purely from the distributed firing of many essentially identical cells in a network. Perhaps it's time to consider that if we want to replicate human intelligence, we ought to take a closer look at some of the remarkable adaptations that have evolved in mammalian neurons, and specifically in the neurons of the human brain. Far from the popular image of the brain as a network of undifferentiated cells, research has increasingly found that different neurons, even of the same basic type, have their own specific functions and abilities.
Indeed, in the modern, popular understanding, we tend to think of the brain as a sophisticated version of the technology it inspired. Merriam-Webster defines a neural network as "a computer architecture in which a number of processors are interconnected in a manner suggestive of the connections between neurons in a human brain and which is able to learn by a process of trial and error." This is a typical definition: the computer-brain analogy focuses on the distributed connections between neurons (or, in a computer, nodes), with no attention to what exactly those neurons are for.
It's a definition that has been good enough since the 1980s, when future Nobel Prize winner Geoffrey Hinton and others picked up on an older idea called backpropagation and applied it as an algorithm that systematically reduces a network's errors over repeated iterations, allowing efficient training of multilayer neural networks. This reinvigorated the earlier idea that a system of nodes and connections mimicking the human brain might create an artificial form of intelligence, leading to the deep learning and machine learning models we have today. Since latching onto the neural network, though, the discipline of artificial intelligence has largely focused on developing different forms of artificial (or simulated) neural networks, and has mostly moved away from studying the human or animal brain as an artifact of evolution with specifics worth mimicking.
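To make that mechanism concrete, here is a minimal sketch of backpropagation, assuming nothing beyond the description above: a tiny two-layer network trained on the XOR problem, with the network size, learning rate and task all chosen for illustration rather than taken from Hinton's work or any study cited here.

```python
import numpy as np

# Minimal backpropagation sketch: a tiny two-layer network learns XOR.
# Sizes, rates and the task are illustrative assumptions only.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute each layer's activations.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back toward the input,
    # scaling by each unit's local sensitivity (the sigmoid derivative).
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)

    # Update: nudge every weight and bias downhill on the error surface.
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_h
    b1 -= lr * err_h.sum(axis=0)

print(out.round(2).ravel())  # typically converges toward [0, 1, 1, 0]
```

The thing to notice is the sheer repetition: thousands of error-reducing passes over the same four examples, the kind of data hunger that researchers quoted later in this piece contrast with one-shot biological learning.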
It's true that most neurons are important only for their firing or non-firing, not for any specific role. But even as computer scientists have been expanding what artificial neural networks can do, research on the brain itself has continued. Over decades, and especially in the past few years, individual pieces of research have gradually identified a host of different brain cell types, upending our simple image of the brain as merely a very powerful computer.
Instead, they reveal mammalian brains to be the product of millions of years of evolution and adaptation to their environments. Over all those years, countless tiny changes have produced a nervous system whose key component, the neuron, can represent our experiences, thoughts and surroundings in specific and wondrously clever ways not available to animals that lack our most recent adaptations. Our particular form of intelligence, it seems, depends on this minority of specialized neuron types.
Back in 2001, Yuri Arshavsky wrote, "I argue that individual neurons may play an important, if not decisive, role in performing cognitive functions of the brain." Even then the research was accumulating, but the idea ran counter to the prevailing view in neuroscience. By now, it's becoming hard to argue against Arshavsky's claim.
There are brain cells that represent entire concepts, some with an affinity for visual information and others for olfactory input; some can encode a whole concept with the firing of a single cell. Others are devoted to specific aspects of cognition and of how we represent the world, firing when their particular function is needed: warm-sensitive neurons, place cells and related time cells, olfactory concept cells, visual concept cells, Lepr neurons that control metabolism... the list of discoveries is long and still growing.
New research on time- and space-encoding cells demonstrates how different cell types work together to give us both "what" and "where" information, allowing our brains to represent our experience of time. Researchers have not yet settled on how best to classify all the different types, but they are increasingly mapping the specific kinds of input each encodes through the patterns of which neurons fire when, and the relationships between the representations this creates.
"I do agree that today's AI models have important deficiencies — and among them might be that they lack some of the predispositions various parts of our brains may have," Jay McClelland, a noted cognitive scientist at Stanford University, told Salon in an email.
AI is doing incredible (and destructive) things these days, solving formerly intractable medical problems and generating imagery that manages the trick of being simultaneously trite and bizarre. The computing power this requires sucks water from a parched earth and puts entire creative industries out of work. AI models that act as "artificial brains" can do therapy, provide health care or write (in a manner of speaking). But there are ways in which large language models and similar generative AI are missing not simply the feeling of being human, but the actual function.
Most of our understanding of how the brain works at the single-neuron level (the equivalent of a node in an artificial neural network) comes from murine (mouse or rat) or primate models, because it isn't considered ethical to perform brain surgery on humans just to find out what interesting things are going on in our brains. Only recently did a technique emerge that allows single-neuron recordings to be taken during medically necessary brain surgery performed on epilepsy patients for diagnostic purposes. That gave researchers subjects who might be available for perhaps a week at a time to look at things and talk about them while scientists recorded which neurons fired, how intensely, and for how long.
This is a very particular situation, but luckily there are many people with epilepsy of different kinds, and a subset of them need electrodes implanted in their brains to record their spontaneous seizures over the course of a week or two, to determine whether they are candidates for curative surgery. These implants are placed in two parts of the brain that often produce seizures: the medial temporal lobe and the medial frontal lobe.

The majority of brain cells are neurons, while some cells have other functions. The exact number of neuron types is unknown, although recent research in human brains identified at least two million neurons and categorized them into "31 superclusters, 461 clusters, and 3313 subclusters," yielding a massive number of individual types. That is remarkably different from the simple three-type classification (motor neurons, sensory neurons, and interneurons) one might have learned in a cursory overview of brain science.
Itzhak Fried, lead author of the newly published research on time and space cells, is a neurosurgeon at UCLA whose lab, and the postdoctoral students trained there, produced many of the major discoveries of these specialized neuron types. Fried told Salon about the two decades or more of research that have revealed the profusion of concept cells and other neuron types we now understand to play critical roles in encoding and representing our experience of the world.
Not just our experience of the world, but of our imaginations, and of experiences that now live only in memory rather than being triggered by external stimuli. Fried cited the work of Hagar Gelbard-Sagiv, a postdoc in his lab, described in a 2008 paper. Subjects were shown a variety of film clips while researchers recorded the activity of single neurons in their hippocampus and surrounding areas, and a subset of those neurons fired in response to a particular concept. One neuron, for example, began firing at the start of a clip from "The Simpsons" and continued firing despite the changing images on the screen. That is, it responded not to a specific image but to the general Simpsons concept, and not to any videos that weren't Simpsons-related.
Even more remarkable was that when the movie-watchers were asked to tell the researchers what they'd seen, they would begin to describe the assortment of 20-odd movie clips they'd been shown, and that particular neuron would fire during the actual act of remembering the Simpsons video.
"After we presented, let's say, 20 videos ... we said to the patient, 'Just tell us what you say, okay?' She says, 'well, you know, I remember Martin Luther King's speech, and I saw a landing on the moon'. And suddenly the Simpsons neuron started firing and then a second later, [the patient] says 'The Simpsons'," Fried recalled for Salon. "It's as if there was some process going on [that] she didn't even realize yet, as there was already a signature of that memory. Obviously there was no sensory input. She was completely locked in her mind. And that concept neuron started firing, and the memory came out, essentially emerged at the conscious level."
In some ways, we do work like computers, relying on distributed networks of firing neurons. In fact, most parts of the brain work like that, Dr. Florian Mormann, a cognitive and clinical neurophysiologist at the University of Bonn who conducts single-neuron recordings on epilepsy patients (and who was a postdoc in Fried's lab), told Salon in a video interview. "One control region we have in the visual pathway is the parahippocampal cortex, which indeed features a distributed network code, which is what most of the brain regions do."
In the Simpsons neuron case, by contrast, it was just a subset of neurons in the medial temporal lobe that behaved with extreme specificity, enabling patients to quickly grasp the relevant concept. A single neuron could signal that a video of, say, Itchy and Scratchy, or of Moe's bar, or of a three-eyed fish at the Springfield nuclear power plant, was a video about the Simpsons.
AI just doesn't work like that. Instead, it analyzes large amounts of data to detect patterns, and its algorithms rely on the statistical probability of a particular decision being the right one. Incorrectly chosen, biased or inadequately large data sets can result in the famous "hallucinations" to which AI models are prone.
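A toy example (ours, assuming nothing about any particular model) shows the mechanics: the system outputs whichever option the statistics favor, with no separate check on whether it is true.

```python
import numpy as np

# A minimal sketch of probability-driven output. The scores are invented
# for illustration; no real model or data set is being quoted here.
vocab = ["Paris", "Lyon", "Berlin"]
logits = np.array([3.1, 1.2, 0.4])             # learned scores for each option

probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
print(vocab[int(np.argmax(probs))], probs.round(2))
# If the training data behind these scores were biased or too thin, the same
# arithmetic would confidently produce a wrong answer: a "hallucination."
```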
"It comes to a fundamental issue about what sort of a system do we need to model intelligence," McClelland explained in a keynote talk, Fundamental Challenges for AI, that he delivered last April at the Computer History Museum in Palo Alto, CA.
Writing to Salon, he offered the example of place cells, the specialized neuron type he's most familiar with.
"There are different views, but the role and nature of so-called place cells is extremely nuanced. These cells can code for all kinds of things in tasks that aren't simply requiring animals to move around in space," McClelland said.
McClelland pointed out that the differences between human brains and artificial intelligence systems include how we learn. Indeed, learning and the necessary process of memory formation and retrieval are key to the specialized roles played by concept cells and some of our other specialized neurons.
"I also think that our brains use far different learning algorithms than our current deep learning systems," McClelland said. "I'm taking inspiration from a fairly-recently discovered new form of learning called Behavioral Time Scale Synaptic plasticity [BTSP] to think about our brains might have come to be able to learn with far less training data than we currently need to train contemporary AI systems."
The pattern recognition that allows AI to learn is loosely based on something called Hebbian-style synaptic plasticity, after Donald Hebb's idea that learning arises through repeated use of the same connections between neurons: repeated co-activation strengthens the efficiency of cells firing together. The term "synaptic plasticity" simply means the capacity of these connections to be strengthened or otherwise changed.
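As a rough illustration (our own toy model, not code from any study discussed here), the Hebbian rule fits in a few lines: each connection grows in proportion to how often its two endpoints are active together, and the strengthened ensemble can then complete a partial pattern.

```python
import numpy as np

# Toy Hebbian plasticity: "cells that fire together wire together."
# Sizes, rates and patterns are illustrative assumptions only.
n, eta = 8, 0.1
W = np.zeros((n, n))                       # synaptic weights among n neurons

pattern = np.array([1, 1, 0, 0, 1, 0, 0, 0], dtype=float)  # a recurring ensemble

for _ in range(50):                        # repeated co-activation...
    W += eta * np.outer(pattern, pattern)  # ...strengthens co-active pairs
    np.fill_diagonal(W, 0.0)               # no self-connections

# The strengthened ensemble now acts like an attractor: a partial cue
# (neurons 0 and 4 only) recalls the full stored pattern.
cue = np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=float)
recalled = (W @ cue > 0.5).astype(float)
print(recalled)  # matches the original pattern
```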
"The prevailing theories of the 20th century and later all proposed that the primary mechanism of CA3 ensemble or attractor formation was Hebbian style synaptic plasticity, based on correlated AP [action potential, or neurons firing] activity," write the authors of a study published in Cell in November that explored the dynamics of neurons contributing to memory formation. Hebbian-style synaptic plasticity allows for creation of memories and learning from experience within a network of neurons and synapses.
This is the basic understanding that underlies the deep learning models used in AI. But the authors of the Cell study propose that in human brains, what's actually going on is a different form of synaptic plasticity, BTSP, which requires far fewer neuronal firings to create a memory; in fact, a single "event" may be enough to produce learning. Like the sparse coding hypothesis, another account of how neurons represent information, BTSP works well because it doesn't need the kind of repeated overlap that Hebbian-style plasticity requires.
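The contrast with the Hebbian sketch above can be caricatured in code. What follows is a deliberately simplified cartoon of BTSP built on our own assumptions (real BTSP involves dendritic plateau potentials and asymmetric, seconds-long plasticity windows): inputs leave a slowly decaying eligibility trace, and a single plateau "event" converts whatever is in that trace into lasting weight change.

```python
import numpy as np

# A cartoon of behavioral-timescale synaptic plasticity (BTSP).
# Everything here is an illustrative assumption, not the published model.
n_inputs, n_steps, dt = 100, 400, 0.05  # 100 inputs, 20 s in 50 ms steps
tau = 2.0                               # eligibility trace decays over ~2 s

w = np.zeros(n_inputs)
trace = np.zeros(n_inputs)
plateau_step = 200                      # one "event" at t = 10 s

for t in range(n_steps):
    # Input i fires around time step 4*i, like place-tuned inputs
    # sweeping past as an animal runs along a track.
    active = np.abs(np.arange(n_inputs) * 4 - t) < 4
    trace = trace * np.exp(-dt / tau) + active  # leaky eligibility trace
    if t == plateau_step:
        # A single plateau converts recent activity into lasting weight
        # change: one event, not thousands of paired firings.
        w += 0.5 * trace

# Weights now peak for inputs active just before the plateau: a "place
# field" formed from a single experience.
print(int(np.argmax(w)))  # near plateau_step / 4 = 50
```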
Concepts in the human brain, as we've seen, can be encoded with just a small number of neurons firing, or even just one, Mormann explained: "So when I say sparse versus network, or sparse versus distributed, that means that [most] neurons are silent, and then just a few neurons suddenly say 'Look, this is my favorite stimulus.' It indicates that that stimulus is there."
A reason evolution might allow itself "the luxury of having these sparse representations" when network codes would be more efficient, Mormann suggested, is that "they actually provide the semantic building blocks that are being pieced together to form mnemonic episodes." That is, our episodic memories are pieced together from a small number of concepts embellished by the brain's tendency to make up less important details, or to remain fuzzy about them.
"The only things that are really reliable and can be reliably tested are a few core semantic facts, and those are the ones that we believe are represented or provided by concept neurons, and they are being pieced together to form episodic memories," Mormann said.
Although we have not yet assembled a complete picture of how humans represent experience, including the apparently vital roles played by concept cells, place or grid cells, time cells and other specialized cell types, it's becoming clearer that neurons in animals have evolved unique adaptations. Researchers have, for example, identified thousands of specialized neuron types in mice, but those help mice do mouse things. In humans, culture, language, care, tools and other still-to-be-demonstrated ways of interacting with the world around us have produced specializations that let us encode entire concepts and think abstractly, internally representing our experiences.
So AI might do well to look back at how the world has shaped us, letting us do human things by the way our brains now make the world.