Humans think — AI, not so much. Science explains why our brains aren't just fancy computers

Yahoo | 25-04-2025

The world, and countless generations of interaction with it, coaxed our brains to evolve the unique way humans perceive reality. And yet, thanks to the past century's developments in cognitive science and now artificial intelligence, we have entrenched a view of the brain that doesn't spend much time on this dynamic. Instead, most of us tend to see our brains as a "network" made of undifferentiated brain cells. These neurons produce cognition through the patterns in which groups of them fire at once — a model that has inspired advanced computers and AI.
But accumulating discoveries of specialized brain cells challenge models of human or artificial intelligence in which thoughts and concepts arise purely from the distributed firing of many essentially identical brain cells in a network. Perhaps it's time to consider that if we want to replicate human intelligence, we ought to take a closer look at some of the amazing adaptations that have evolved in mammalian neurons — and specifically, in neurons of the human brain. Instead of the popularly understood idea of the brain as a neural network of undifferentiated brain cells, research has increasingly found that different neurons, even of the same basic type, have their own specific functions and abilities.
In fact, in the modern, popular understanding of the brain, we really tend to think of this organ as a sophisticated version of the technology it inspired. Merriam-Webster defines neural network as "a computer architecture in which a number of processors are interconnected in a manner suggestive of the connections between neurons in a human brain and which is able to learn by a process of trial and error." This is a typical definition, in which the computer-brain analogy focuses on the distributed connections between neurons (or, in a computer, nodes) with no attention to what exactly those neurons are for.
It's a definition that has been good enough since the 1980s, when future Nobel Prize-winner Geoffrey Hinton and others picked up on an older idea called backpropagation, applying it as an algorithm that mimics human brains by systematically reducing errors through repeated iterations, and thus allows for more efficient training of multilayer neural networks. This reinvigorated the earlier idea that a system of nodes and connections mimicking the human brain might create an artificial form of intelligence, leading to the deep learning and machine learning models we have today. Since the discipline of artificial intelligence latched onto the neural network, though, it has largely focused on developing different forms of artificial (or simulated) neural networks, and has mostly moved away from studying the human or animal brain as an artifact of evolution with specifics worth mimicking.
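Stripped of biology, the idea is compact enough to sketch. The following toy script (my own illustration, not Hinton's code or any production system) trains a tiny two-layer network on the XOR problem the way the article describes: run the network forward, measure the error, and propagate that error backward to nudge every weight, over many repeated iterations.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A 2-input, 2-hidden, 1-output network; the third weight in each row is a bias.
wh = [[0.5, -0.5, 0.1], [-0.3, 0.4, -0.2]]  # hidden-layer weights
wo = [0.6, -0.4, 0.05]                       # output-layer weights

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in wh]
    o = sigmoid(wo[0] * h[0] + wo[1] * h[1] + wo[2])
    return h, o

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = total_error()
lr = 0.5
for _ in range(2000):                  # repeated iterations
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)    # error signal at the output...
        d_h = [d_o * wo[j] * h[j] * (1 - h[j]) for j in range(2)]  # ...pushed back
        for j in range(2):             # nudge every weight to reduce the error
            wo[j] -= lr * d_o * h[j]
            wh[j][0] -= lr * d_h[j] * x[0]
            wh[j][1] -= lr * d_h[j] * x[1]
            wh[j][2] -= lr * d_h[j]
        wo[2] -= lr * d_o
after = total_error()
print(before, after)  # error shrinks with training
```

Notice what the sketch leaves out: every node is interchangeable, and nothing about any single node means anything on its own — exactly the assumption the rest of this article questions.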
But while it's true that most neurons matter only for their firing or non-firing, not for any specialized role, research on the brain itself has continued even as computer scientists have been expanding what artificial neural networks can do. Over decades, and especially in the past few years, individual pieces of research have gradually identified a host of different brain cell types, upending our simple image of the brain as merely a very powerful computer.
Instead, they reveal mammalian brains to be the product of millions of years of evolution and adaptation to environments. Over all those years, countless tiny changes have led animal brains to evolve a unique nervous system in which the key component, the neuron, can now represent our experiences, thoughts and surroundings in specific and wondrously clever ways not available to animals that have not evolved our most recent adaptations. Our particular form of intelligence, it seems, depends on this minority of specialized neuron types.
Back in 2001, Yuri Arshavsky wrote, "I argue that individual neurons may play an important, if not decisive, role in performing cognitive functions of the brain." At that time the research was already accumulating, but the idea went counter to the prevailing view in neuroscience. By now, though, it's becoming hard to argue against Arshavsky's claim.
There are brain cells that represent entire concepts, some with an affinity for visual information and others for olfactory input, encoding a whole concept with the firing of a single cell. Other neurons are devoted to specific aspects of cognition and how we represent the world, firing when their particular function is needed: warm-sensitive neurons, place cells and related time cells, olfactory concept cells, visual concept cells, Lepr neurons that control metabolism... the list of discoveries is long and still growing.
New research looking into the already-brewing notion of time- and space-encoding cells demonstrates how different cell types work together to give us both "what" and "where" information, allowing our brains to represent our experience of time. Researchers still haven't decided how best to classify all the different types, but they are increasingly trying to map the specific kinds of input they encode through the patterns of which neurons fire when, and the relationships between the different representations this creates.
"I do agree that today's AI models have important deficiencies — and among them might be that they lack some of the predispositions various parts of our brains may have," Jay McClelland, a noted cognitive scientist at Stanford University, told Salon in an email.
AI is doing incredible (and destructive) things these days, solving seemingly impossible medical problems and generating imagery that manages the trick of being simultaneously trite and bizarre. The computing power this requires sucks water from a parched earth and puts entire creative industries out of work. AI models that act as "artificial brains" are able to do therapy, provide health care or write (in a manner of speaking). But there are ways in which large language models and similar generative AI are missing not simply the feeling of being human, but the actual function.
Most of our understanding of how the brain works at the single-neuron level — equivalent to a node in an artificial neural network — comes from studies of murine (mouse or rat) or primate models, because it isn't considered ethical to do brain surgery on humans just to find out what interesting things are going on in our brains. Only with the recent development of a technique that allows single-neuron recordings during unavoidable brain surgery done on epilepsy patients for diagnostic purposes did researchers gain subjects who are available, perhaps a week at a time, to look at things and talk about them while scientists record which neurons fire, how intensely, and for how long.
This is a very particular situation, but luckily there are many people with epilepsy of different kinds, and a subset of them need electrodes implanted in their brains to record their spontaneous seizures over the course of a week or two, so as to figure out whether they are candidates for surgery to cure them. These implants are done in two parts of the brain that often produce seizures, the medial temporal lobe and the medial frontal lobe.

The majority of brain cells are neurons, while some cells have other functions. The exact number of neuron types is still unknown, but recent research in human brains identified at least two million neurons, which researchers were able to categorize into different types: "31 superclusters, 461 clusters, and 3313 subclusters," resulting in a massive number of individual types. It's remarkably different from the simple three-type classification — motor neurons, sensory neurons, and interneurons — one might have learned in a cursory overview of brain science.
Itzhak Fried, lead author on the newly published research on time and space cells, is a neurosurgeon at UCLA whose lab, and the postdoctoral students trained there, produced many of the major discoveries of these specialized neuron types. Fried told Salon about the two decades or more of research that have revealed the profusion of concept cells and other neuron types we now understand to play critical roles in encoding and representing our experience of the world.
Not just with the world, but with our imaginations, and experiences that now live only in memory rather than being triggered by external stimuli. Fried cited the work of Hagar Gelbard-Sagiv, a postdoc in his lab, who, as described in a 2008 paper, found that when subjects were shown a variety of film clips while researchers recorded the activity of single neurons in their hippocampus and surrounding areas, a subset of those neurons fired in response to a particular concept — there was one neuron, for example, that began firing at the start of a clip from "The Simpsons" and continued firing despite the changing images on the screen. That is, it responded not to a specific image but to the general Simpsons concept — and not to any other videos that weren't Simpsons-related.
Even more remarkable was that when the movie-watchers were asked to tell the researchers what they'd seen, they would begin to describe the assortment of 20-odd movie clips they'd been shown, and that particular neuron would fire during the actual act of remembering the Simpsons video.
"After we presented, let's say, 20 videos ... we said to the patient, 'Just tell us what you say, okay?' She says, 'well, you know, I remember Martin Luther King's speech, and I saw a landing on the moon'. And suddenly the Simpsons neuron started firing and then a second later, [the patient] says 'The Simpsons'," Fried recalled for Salon. "It's as if there was some process going on [that] she didn't even realize yet, as there was already a signature of that memory. Obviously there was no sensory input. She was completely locked in her mind. And that concept neuron started firing, and the memory came out, essentially emerged at the conscious level."
In some ways, we do work like computers, relying on distributed networks of firing neurons. In fact, most parts of the brain work like that, Dr. Florian Mormann, a cognitive and clinical neurophysiologist at the University of Bonn who conducts single-neuron recordings on epilepsy patients (and who was a postdoc in Fried's lab), told Salon in a video interview. "One control region we have in the visual pathway is the parahippocampal cortex, which indeed features a distributed network code, which is what most of the brain regions do."
And in the Simpsons neuron case, for example, it was just a subset of neurons in the medial temporal lobe that behaved with extreme specificity to enable patients to quickly grasp the relevant concept. Just a single neuron could determine the patient's memory that a video of, say, Itchy and Scratchy, or of Moe's bar, or of a three-eyed fish at the Springfield nuclear power plant, was a video about the Simpsons.
AI just doesn't work like that. Instead, it analyzes large amounts of data to detect patterns, and its algorithms rely on the statistical probability of a particular decision being the right one. Incorrectly chosen, biased or inadequately large data sets can result in the famous "hallucinations" to which AI models are prone.
"It comes to a fundamental issue about what sort of a system do we need to model intelligence," McClelland explained in a keynote talk, Fundamental Challenges for AI, that he delivered last April at the Computer History Museum in Mountain View, CA.
Writing to Salon, he offered the example of place cells, the specialized neuron he's most familiar with.
"There are different views, but the role and nature of so-called place cells is extremely nuanced. These cells can code for all kinds of things in tasks that aren't simply requiring animals to move around in space," McClelland said.
McClelland pointed out that the differences between human brains and artificial intelligence systems include how we learn. Indeed, learning and the necessary process of memory formation and retrieval are key to the specialized roles played by concept cells and some of our other specialized neurons.
"I also think that our brains use far different learning algorithms than our current deep learning systems," McClelland said. "I'm taking inspiration from a fairly recently discovered new form of learning called Behavioral Time Scale Synaptic plasticity [BTSP] to think about [how] our brains might have come to be able to learn with far less training data than we currently need to train contemporary AI systems."
The pattern recognition that allows AI to learn is modeled on something called Hebbian-style synaptic plasticity, named for Donald Hebb's idea that learning arises through repeated use of the same connections between neurons in the brain: repeated activation strengthens the efficiency of cells firing together. The term "synaptic plasticity" simply means the ability of these connections to be strengthened or otherwise changed.
"The prevailing theories of the 20th century and later all proposed that the primary mechanism of CA3 ensemble or attractor formation was Hebbian style synaptic plasticity, based on correlated AP [action potential, or neurons firing] activity," write the authors of a study published in Cell in November that explored the dynamics of neurons contributing to memory formation. Hebbian-style synaptic plasticity allows for creation of memories and learning from experience within a network of neurons and synapses.
This is the basic understanding that underlies deep learning models used in AI. But the authors of the Cell study propose that in human brains, what's actually going on is a different form of synaptic plasticity, BTSP, which allows for far fewer firings of neurons to create a memory — in fact, you might just need a single "event" to result in learning. Like another hypothesis for how neurons do their thing, the sparse coding hypothesis, BTSP works well because it doesn't need the kind of overlap that Hebbian-style plasticity requires.
Concepts in the human brain, as we've seen, can be encoded with just a small number of neurons firing, or even just one, Mormann explained: "So when I say sparse versus network, or sparse versus distributed, that means that [most] neurons are silent, and then just a few neurons suddenly say 'Look, this is my favorite stimulus.' It indicates that that stimulus is there."
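To make "sparse versus distributed" concrete, here is a toy readout (my own sketch, not Mormann's analysis; the labels are invented for illustration): in a sparse code, most units stay silent, and the lone active unit by itself names the concept.

```python
# Invented concept labels, one per recorded "neuron" in this toy.
labels = ["Simpsons", "moon landing", "MLK speech"]

def sparse_readout(activity):
    # Nearly every entry is zero; the single active neuron
    # is enough to identify which concept is present.
    winner = max(range(len(activity)), key=lambda i: activity[i])
    return labels[winner]

activity = [0.9, 0.0, 0.0]      # only the "Simpsons neuron" is firing
print(sparse_readout(activity))  # -> Simpsons
```

A distributed code, by contrast, would need the whole activity vector compared against stored patterns before any concept could be read out; here one cell carries the answer.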
A reason evolution might allow itself "the luxury of having these sparse representations" when network codes would be more efficient, Mormann suggested, is that "they actually provide the semantic building blocks that are being pieced together to form mnemonic episodes." That is, our episodic memories are pieced together from a small number of concepts embellished by the brain's tendency to make up less important details, or to remain fuzzy about them.
"The only things that are really reliable and can be reliably tested are a few core semantic facts, and those are the ones that we believe are represented or provided by concept neurons, and they are being pieced together to form episodic memories," Mormann said.
Although we have not yet created a complete picture of how humans represent experience, including through the apparently vital roles played by concept cells, place or grid cells, time cells and other specialized cell types, it's becoming clearer that neurons in animals have evolved unique adaptations. Researchers have, for example, identified thousands of specialized neurons in mice. But those help mice do mouse things. In humans, culture, language, care, tools and other still-to-be-demonstrated ways in which we interact with the world around us have produced specializations that let us encode entire concepts and think in an abstract way, internally representing our experiences.
So AI might do well to look back at how the world has shaped us, letting us do human things by the way our brains now make the world.

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

North America takes the bulk of AI VC investments, despite tough political environment
North America takes the bulk of AI VC investments, despite tough political environment

TechCrunch

time10 hours ago

  • TechCrunch

North America takes the bulk of AI VC investments, despite tough political environment

Despite what some experts have characterized as an environment increasingly hostile to AI R&D, North America continues to receive the bulk of AI venture dollars, according to data from investment tracker PitchBook. Between February and May of this year, VCs poured $69.7 billion into North America-based AI and machine learning startups across 1,528 deals. That's compared with $6.4 billion that VC firms invested in European AI ventures across 742 deals across the same period. Asia-based startups have fared a bit worse than their European counterparts, according to PitchBook. Between February and May, VCs invested just $3 billion in Asia-based AI startups across 515 deals. Under President Donald Trump, the U.S. has dramatically cut funding to scientific grants related to basic AI research, made it more difficult for foreign students specializing in AI to study in the U.S., and threatened to dismantle university-housed AI labs by freezing billions of dollars in federal funds. The administration's trade policies, meanwhile, including its retaliatory tariffs, have led to a chaotic market unfavorable for risky new AI ventures. In a post on X in March, AI pioneer and Nobel Laureate Geoffrey Hinton called for billionaire Elon Musk, who until recently advised Trump's cost-cutting group, the Department of Government Efficiency, to be expelled from the British Royal Society 'because of the huge damage he is doing to scientific institutions in the U.S.' One might expect that Europe, which has pledged to become a global leader in AI, would attract more venture capital in light of Trump's controversial policies in the U.S., which have created uncertainty and confusion for founders, investors, and researchers alike. Moreover, the EU has committed hundreds of billions of euros to support the development of AI within its member countries and already has a number of successful, well-funded AI startups in its ranks (see Mistral, H, and Aleph Alpha, to name a few). 
But that anticipated shift in global investment hasn't come to pass. There isn't any sign of a mass VC exodus to the bloc, or of significant upticks in AI funding overseas — at least not yet. Techcrunch event Save now through June 4 for TechCrunch Sessions: AI Save $300 on your ticket to TC Sessions: AI—and get 50% off a second. Hear from leaders at OpenAI, Anthropic, Khosla Ventures, and more during a full day of expert insights, hands-on workshops, and high-impact networking. These low-rate deals disappear when the doors open on June 5. Exhibit at TechCrunch Sessions: AI Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you've built — without the big spend. Available through May 9 or while tables last. Berkeley, CA | REGISTER NOW The same is true for China, which has spawned high-profile AI startups like DeepSeek and Butterfly Effect — the company behind the agentic platform Manus — but where VC activity in the country and the broader Asian region remains relatively austere. (Export controls impacting the ability of certain Asian countries to procure AI chips are almost certainly a factor.) In 2024, North American startups secured 75.6% of all VC AI funding — $106.24 billion. That share has only increased this year. So far in 2025, North American AI investments represent 86.2% ($79.74 billion) of all VC funding for AI globally. It paints a somewhat surprising picture. Even amid mounting political and regulatory headwinds under Trump's second term, the U.S. remains the undisputed center for AI capital, meaning investors, fatigued as they may be by the administration's unpredictability, are still counting on U.S. innovation to deliver the biggest returns, at least for now.

‘Godfather Of AI' Launches Nonprofit Focused On Safer Systems
‘Godfather Of AI' Launches Nonprofit Focused On Safer Systems

Forbes

time14 hours ago

  • Forbes

‘Godfather Of AI' Launches Nonprofit Focused On Safer Systems

Yoshua Bengio testifies on the importance of AI regulation at a U.S. Senate Judiciary Committee ... More hearing in July 2023. Computer scientist Yoshua Bengio, often referred to as the 'godfather' of AI, has launched a nonprofit aimed at creating AI systems that prioritize safety over business priorities. The organization, called LawZero, 'was founded in response to evidence that today's frontier AI models are developing dangerous capabilities and behaviors, including deception, self-preservation and goal misalignment,' reads a statement posted to its website on Tuesday. 'LawZero's work will help to unlock the immense potential of AI in ways that reduce the likelihood of a range of known dangers associated with today's systems, including algorithmic bias, intentional misuse and loss of human control.' LawZero is structured as a nonprofit 'to ensure it is insulated from market and government pressures, which risk compromising AI safety,' the statement says. Bengio is a worldwide leader in AI and a co-recipient of the 2018 A.M. Turing Award, the Association for Computing Machinery's prestigious annual prize that's sometimes called the Nobel Prize of Computing. He won the award alongside two other deep-learning pioneers — Geoffrey Hinton, another 'godfather of AI' who worked at Google, and Yann LeCun — for conceptual and engineering breakthroughs, made over decades, that have positioned deep neural networks as a critical component of computing. While artificial intelligence has sparked considerable excitement across industries — and Bengio recognizes its potential as a driver of significant innovation — it's also led to mounting concerns about possible pitfalls. Generative AI tools are capable of producing text, images and video that spread almost instantly over social media and can be difficult to distinguish from the real thing. Bengio has called for slowing the development of AI systems to better understand and regulate them. 
'There is no guarantee that someone in the foreseeable future won't develop dangerous autonomous AI systems with behaviors that deviate from human goals and values,' the University of Montreal professor wrote in a blog post announcing why he'd signed a 2023 open letter calling for a slowdown in the development of some AI tools. He has signed other such statements, and appeared in front of the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law to outline the risks of AI misuse. Bengio also serves as scientific director at Mila (Montreal Institute for Learning Algorithms), an artificial-intelligence research institute. Now, he'll add LawZero president and scientific director to his resume. LawZero says it's assembling a team of world-class AI researchers, though it did not immediately respond to a request for comment on who is included in that group. Together, the scientists are working on a system called Scientist AI, which LawZero calls a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic, meaning it doesn't have agency or work autonomously, but instead behaves in response to human input and goals. 'Such AI systems could be used to provide oversight for agentic AI systems, accelerate scientific discovery and advance the understanding of AI risks and how to avoid them,' LawZero says. 'LawZero believes that AI should be cultivated as a global public good—developed and used safely towards human flourishing.'

White House Budget Plan Would Devastate U.S. Space Science
White House Budget Plan Would Devastate U.S. Space Science

Scientific American

time16 hours ago

  • Scientific American

White House Budget Plan Would Devastate U.S. Space Science

Late last week the Trump Administration released its detailed budget request for fiscal year 2026 —a request that, if enacted, would be the equivalent of carpet-bombing the national scientific enterprise. 'This is a profound, generational threat to scientific leadership in the United States,' says Casey Dreier, chief of space policy at the Planetary Society, a science advocacy group. 'If implemented, it would fundamentally undermine and potentially devastate the most unique capabilities that the U.S. has built up over a half-century.' The Trump administration's proposal, which still needs to be approved by Congress, is sure to ignite fierce resistance from scientists and senators alike. Among other agencies, the budget deals staggering blows to NASA and the National Science Foundation (NSF), which together fund the majority of U.S. research in astronomy, astrophysics, planetary science, heliophysics and Earth science —all space-related sciences that have typically mustered hearty bipartisan support. On supporting science journalism If you're enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today. The NSF supports ground-based astronomy, including such facilities as the Nobel Prize–winning gravitational-wave detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO), globe-spanning arrays of radio telescopes, and cutting-edge observatories that stretch from Hawaii to the South Pole. The agency faces a lethal 57 percent reduction to its $9-billion budget, with deep cuts to every program except those in President Trump's priority areas, which include artificial intelligence and quantum information science. NASA, which funds space-based observatories, faces a 25 percent reduction, dropping the agency's $24.9-billion budget to $18.8 billion. 
The proposal beefs up efforts to send humans to the moon and to Mars, but the agency's Science Mission Directorate —home to Mars rovers, the Voyager interstellar probes, the James Webb Space Telescope (JWST), the Hubble Space Telescope, and much more —is looking at a nearly 50 percent reduction, with dozens of missions canceled, turned off or operating on a starvation diet. 'It's an end-game scenario for science at NASA,' says Joel Parriott, director of external affairs and public policy at the American Astronomical Society. 'It's not just the facilities. You're punching a generation-size hole, maybe a multigenerational hole, in the scientific and technical workforce. You don't just Cryovac these people and pull them out when the money comes back. People are going to move on.' Adding to the chaos, on Saturday President Trump announced that billionaire entrepreneur and private astronaut Jared Isaacman was no longer his pick for NASA administrator —just days before the Senate was set to confirm Isaacman's nomination. Initial reports —which have now been disputed —explained the president's decision as stemming from his discovery that Isaacman recently donated money to Democratic candidates. Regardless of the true reason, the decision leaves both NASA and the NSF, whose director abruptly resigned in April, with respective placeholder 'acting' leaders at the top. That leadership vacuum significantly weakens the agencies' ability to fight the proposed budget cuts and advocate for themselves. 'What's more inefficient than a rudderless agency without an empowered leadership?' Dreier asks. Actions versus Words During his second administration, President Trump has repeatedly celebrated U.S. leadership in space. When he nominated Isaacman last December, Trump noted 'NASA's mission of discovery and inspiration' and looked to a future of 'groundbreaking achievements in space science, technology and exploration.' 
More recently, while celebrating Hubble's 35th anniversary in April, Trump called the telescope 'a symbol of America's unmatched exploratory might' and declared that NASA would 'continue to lead the way in fueling the pursuit of space discovery and exploration.' The administration's budgetary actions speak louder than Trump's words, however. Instead of ushering in a new golden age of space exploration—or even setting up the U.S. to stay atop the podium—the president's budget 'narrows down what the cosmos is to moon and Mars and pretty much nothing else,' Dreier says. 'And the cosmos is a lot bigger, and there's a lot more to learn out there.' Dreier notes that when corrected for inflation, the overall NASA budget would be the lowest it's been since 1961. But in April of that year, the Soviet Union launched the first human into orbit, igniting a space race that swelled NASA's budget and led to the Apollo program putting American astronauts on the moon. Today China's rapid progress and enormous ambitions in space would make the moment ripe for a 21st-century version of this competition, with the U.S. generously funding its own efforts to maintain pole position. Instead the White House's budget would do the exact opposite. 'The seesaw is sort of unbalanced,' says Tony Beasley, director of the NSF-funded National Radio Astronomy Observatory (NRAO). 'On the one side, we're saying, 'Well, China's kicking our ass, and we need to do something about that.' But then we're not going to give any money to anything that might actually do that.' How NASA will achieve a crewed return to the moon and send astronauts to Mars—goals that the agency now considers part of 'winning the second space race'—while also maintaining its leadership in science is unclear. 'This is Russ Vought's budget,' Dreier says, referring to the director of the White House's Office of Management and Budget (OMB), an unelected bureaucrat who has been notorious for his efforts to reshape the U.S. 
government by weaponizing federal funding. 'This isn't even Trump's budget. Trump's budget would be good for space. This one undermines the president's own claims and ambitions when it comes to space.' 'Low Expectations' at the High Frontier Rumors began swirling about the demise of NASA science in April, when a leaked OMB document described some of the proposed cuts and cancellations. Those included both the beleaguered, bloated Mars Sample Return (MSR) program and the on-time, on-budget Nancy Grace Roman Space Telescope, the next astrophysics flagship mission. The top-line numbers in the more fleshed-out proposal are consistent with that document, and MSR would still be canceled. But Roman would be granted a stay of execution: rather than being zeroed out, it would be put on life support. 'It's a reprieve from outright termination, but it's still a cut for functionally no reason,' Dreier says. 'In some ways, [the budget] is slightly better than I was expecting. But I had very low expectations.' In the proposal, many of the deepest cuts would be made to NASA science, which would sink from $7.3 billion to $3.9 billion. Earth science missions focused on carbon monitoring and climate change, as well as programs aimed at education and workforce diversity, would be effectively erased by the cuts. But a slew of high-profile planetary science projects would suffer, too, with cancellations proposed for two future Venus missions, the Juno mission that is currently surveilling Jupiter, the New Horizons mission that flew by Pluto and two Mars orbiters. (The Dragonfly mission to Saturn's moon Titan would survive, as would the flagship Europa Clipper spacecraft, which launched last October.) NASA's international partnerships in planetary science fare poorly, too, as the budget rescinds the agency's involvement with multiple European-led projects, including a Venus mission and Mars rover. 
The proposal is even worse for NASA astrophysics—the study of our cosmic home—which 'really takes it to the chin,' Dreier says, with a roughly $1-billion drop to just $523 million. In the president's proposal, only three big astrophysics missions would survive: the soon-to-launch Roman and the already-operational Hubble and JWST. The rest of NASA's active astrophysics missions, which include the Chandra X-ray Observatory, the Fermi Gamma-ray Space Telescope and the Transiting Exoplanet Survey Satellite (TESS), would be severely pared back or zeroed out. Additionally, the budget would nix NASA's contributions to large European missions, such as a future space-based gravitational-wave observatory.

'This is the most powerful fleet of missions in the history of the study of astrophysics from space,' says John O'Meara, chief scientist at the W. M. Keck Observatory in Hawaii and co-chair of a recent senior review panel that evaluated NASA's astrophysics missions. The report found that each reviewed mission 'continues to be capable of producing important, impactful science.' This fleet, O'Meara adds, is more than the sum of its parts, with much of its power emerging from synergies among multiple telescopes that study the cosmos in many different types, or wavelengths, of light.

By hollowing out NASA's science to ruthlessly focus on crewed missions, the White House budget might be charitably viewed as seeking to rekindle a heroic age of spaceflight—with China's burgeoning space program as the new archrival. But even for these supposedly high-priority initiatives, the proposed funding levels appear too anemic to give the U.S. any competitive edge. For example, the budget directs about $1 billion to new technology investments to support crewed Mars missions, while conservative estimates have projected that such voyages would cost hundreds of billions of dollars more. 'It cedes U.S. leadership in space science at a time when other nations, particularly China, are increasing their ambitions,' Dreier says. 'It completely flies in the face of the president's own stated goals for American leadership in space.'

Undermining the Foundation

The NSF's situation, which one senior space scientist predicted would be 'diabolical' when the NASA numbers leaked back in April, is also unsurprisingly dire. Unlike NASA, which is focused on space science and exploration, the NSF's programs span the sweep of scientific disciplines, meaning that even small, isolated cuts—let alone the enormous ones that the budget has proposed—can have shockingly large effects on certain research domains. 'Across the different parts of the NSF, the programs that are upvoted are the president's strategic initiatives, but then everything else gets hit,' Beasley says.

Several large-scale NSF-funded projects would escape more or less intact. Among these are the panoramic Vera C. Rubin Observatory, scheduled to unveil its first science images later this month, and the Atacama Large Millimeter/submillimeter Array (ALMA) radio telescope. The budget also moves the Giant Magellan Telescope, which would boast starlight-gathering mirrors totaling more than 25 meters across, into a final design phase. All three of those facilities take advantage of Chile's pristine dark skies. Other large NSF-funded projects that would survive include the proposed Next Generation Very Large Array of radio telescopes in New Mexico and several facilities at the South Pole, such as the IceCube Neutrino Observatory.

If this budget is enacted, however, NSF officials anticipate funding only a measly 7 percent of research proposals overall rather than 25 percent; the number of graduate research fellowships awarded would be cleaved in half, and postdoctoral fellowships in the physical sciences would drop to zero.
NRAO's Green Bank Observatory—home to the largest steerable single-dish radio telescope on the planet—would likely shut down. So would other, smaller observatories in Arizona and Chile. The Thirty Meter Telescope, a humongous, perennially embattled project with no clear site selection, would be canceled. And the budget proposes closing one of the two gravitational-wave detectors used by the LIGO collaboration—whose observations of colliding black holes earned the 2017 Nobel Prize in Physics—even though both detectors need to be online for LIGO's experiment to work. Even factoring in other operational detectors, such as Virgo in Europe and the Kamioka Gravitational Wave Detector (KAGRA) in Japan, shutting down half of LIGO would leave a gaping blind spot in humanity's gravitational-wave view of the heavens.

'The consequences of this budget are that key scientific priorities, on the ground and in space, will take at least a decade longer—or not be realized at all,' O'Meara says. 'The universe is telling its story at all wavelengths. It doesn't care what you build, but if you want to hear that story, you must build many things.'

Dreier, Parriott and others are anticipating fierce battles on Capitol Hill. And already both Democratic and Republican legislators have issued statements signaling that they won't support the budget request as is. 'This sick joke of a budget is a nonstarter,' said Representative Zoe Lofgren of California, ranking member of the House Committee on Science, Space, and Technology, in a recent statement. And in an earlier statement, Senator Susan Collins of Maine, chair of the powerful Senate Committee on Appropriations, cautioned that 'the President's Budget Request is simply one step in the annual budget process.'

The Trump administration has 'thrown a huge punch here, and there will be a certain back-reaction, and we'll end up in the middle somewhere,' Beasley says. 'The mistake you can make right now is to assume that this represents finalized decisions and the future—because it doesn't.'
