13-02-2025
Better AI for Alexa, Tesla? Princeton team cracks brain's decision-making code
Imagine that you are crossing a busy street at a crosswalk when, out of nowhere, a car blares its horn. Your head would automatically swivel toward the sound, and you might even take a step back or forward.
The human brain has to process a ton of information to make split-second decisions. How does it weigh conflicting and overlapping sensory cues to make the best choice?
A new study from Princeton neuroscientists offers a fresh perspective on this complex process. Their findings could not only improve our understanding of decision-making in the human brain but could also advance artificial intelligence systems, like self-driving cars and virtual assistants.
'The goal of the research was to understand if low-dimensional mechanisms were operating inside large recurrent neural networks,' said study author Christopher Langdon.
In the latest study, rather than analyzing the whole web of interconnected neurons, the researchers developed the 'latent circuit' model, which posits that decision-making is driven by a small set of neurons referred to as 'ringleader' neurons.
This 'low-dimensional' approach changes the way brain computations are understood. To validate their model, Langdon and Engel used a decision-making task commonly employed with both humans and other animals.
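To see what 'low-dimensional' means here, consider a toy sketch (our illustration, not the authors' fitted model): if a handful of latent variables drive hundreds of recorded neurons through a linear embedding, a principal-component analysis of the high-dimensional activity reveals that it really occupies only a few dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 latent variables drive 200 "recorded" neurons.
n_latent, n_neurons, n_time = 4, 200, 500
latents = rng.standard_normal((n_time, n_latent)).cumsum(axis=0)  # latent trajectories
Q = rng.standard_normal((n_latent, n_neurons))  # linear embedding into the big network
activity = latents @ Q                          # (n_time, n_neurons) observed activity

# PCA via SVD: how many dimensions does the activity actually occupy?
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()
print(f"variance in first 4 PCs: {explained[:4].sum():.3f}")  # ≈ 1.000
```

Even though 200 neurons were 'recorded', the first four components capture essentially all the variance, because a four-dimensional latent circuit generated the data.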
In this task, participants first see a shape on a screen (a square or triangle) that serves as a context cue. Then, they view a moving grid of dots (a sensory cue). Depending on the initial shape, they must determine either the color of the dots (red or green) or the direction of their motion (left or right).
The researchers analyzed the neural activity recorded during the task using the latent circuit model. They observed a central pattern: when motion was the relevant cue, motion-processing neurons suppressed the activity of the color-processing neurons. When color was the relevant cue, the reverse occurred: color-processing neurons inhibited the motion-related ones.
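This inhibition pattern can be illustrated with a minimal two-unit rate model (a hypothetical sketch; the gate values and inhibition strength are assumed, not taken from the study): context gates the inputs, and cross-inhibition lets the relevant pathway silence the irrelevant one.

```python
import numpy as np

def simulate(context, motion_drive, color_drive, steps=300, dt=0.1):
    """Two-unit rate model with mutual inhibition, gated by context.

    context: 'motion' or 'color' -- which cue is task-relevant.
    Returns the final rates of the (motion, color) units.
    """
    w_inhib = 2.0                # cross-inhibition strength (assumed value)
    r = np.zeros(2)              # firing rates of [motion, color] units
    gate = np.array([1.0, 0.2]) if context == 'motion' else np.array([0.2, 1.0])
    inp = np.array([motion_drive, color_drive]) * gate
    for _ in range(steps):
        drive = inp - w_inhib * r[::-1]             # each unit inhibits the other
        r = r + dt * (-r + np.maximum(drive, 0.0))  # ReLU rate dynamics
    return r

m, c = simulate('motion', motion_drive=1.0, color_drive=1.0)
print(m > c)  # True: the motion unit dominates when motion is the relevant context
```

With equal sensory drive on both channels, whichever pathway the context favors ends up suppressing the other, mirroring the suppression the researchers observed.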
'It was very exciting to find an interpretable, concrete mechanism hiding inside a big network,' Langdon said.
The latent circuit model not only captures relationships between neurons; it also makes testable predictions. The researchers showed that when specific neural connections in the model were weakened or removed, decision-making performance degraded in predictable ways.
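A toy version of such a perturbation (again our illustration, not the authors' fitted circuit) shows the logic: weakening the cross-inhibition lets the irrelevant cue leak into the decision, so accuracy on trials where the cues conflict drops.

```python
import numpy as np

rng = np.random.default_rng(1)

def accuracy(w_inhib, n_trials=2000):
    """Fraction correct when motion is the relevant cue, as a function of
    cross-inhibition strength w_inhib (toy readout, assumed form)."""
    correct = 0
    for _ in range(n_trials):
        motion = rng.choice([-1.0, 1.0]) * rng.uniform(0.2, 1.0)  # signed motion strength
        color = rng.choice([-1.0, 1.0]) * rng.uniform(0.2, 1.0)   # signed color strength
        leak = max(0.0, 1.0 - w_inhib)   # irrelevant signal surviving inhibition
        evidence = motion + leak * color
        # the correct choice follows the motion cue's sign in this context
        correct += (np.sign(evidence) == np.sign(motion))
    return correct / n_trials

print(accuracy(1.0), accuracy(0.0))  # intact vs. ablated inhibition
```

With the inhibitory connection intact (w_inhib = 1.0) the irrelevant color signal is fully suppressed and every trial is decided by motion alone; with it ablated (w_inhib = 0.0) the color cue leaks through and flips the choice on many conflict trials.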
'The cool thing about our new work is that we showed how you can translate all those things that you can do with a circuit onto a big network,' Langdon said.
'When you build a small neural circuit by hand, there's lots of things you can do to convince yourself you understand it. You can play with connections and perturb nodes, and have some idea what should happen to behavior when you play with the circuit in this way.'
Disorders like depression, ADHD, and Alzheimer's disease often involve difficulties with decision-making. This research could one day inform better treatments for these conditions by revealing the underlying mathematical principles.
Apart from advancing our understanding of the human nervous system, this model could enhance artificial intelligence. Digital assistants like Alexa or even self-driving cars depend on decision-making algorithms that combine several sensory inputs. If AI systems were designed to adapt to and resolve conflicting information just as the human brain does, they would become far more reliable.
The next phase of research will involve applying the latent circuit model to other well-studied decision-making tasks. 'A lot of the tightly controlled decision-making tasks that experimentalists study, I believe that they likely have relatively simple latent mechanisms,' Langdon said.
'My hope is that we can start looking for these mechanisms now in those datasets,' he concluded.
The study has been published in Nature Neuroscience.