Richard Garwin, designer of the first H-bomb who also paved the way for MRI, GPS and touch-screens
Richard Garwin, who has died aged 97, was an American nuclear scientist who designed the world's first hydrogen bomb and went on to become a presidential adviser on arms control, while helping to lay the groundwork for such technologies as magnetic resonance imaging, high-speed laser printers and touch-screen monitors. The Nobel prizewinner Enrico Fermi called him 'the only true genius I have ever met', but he never became a household name: a 2017 biography was subtitled 'The Most Influential Scientist You've Never Heard Of'.
Edward Teller is usually credited, in an unattributed phrase, as the 'father of the H-bomb'. Owing to the secrecy surrounding its development, it was only in recent years that historians became aware of Garwin's role, following the publication in 2001 of a transcript of a recording in which Teller, while not relinquishing credit for devising the bomb, recalled that the 'first design was made by Dick Garwin'.
In 1951 Garwin, then a 23-year-old faculty member at the University of Chicago, was working during his summer holidays at the Los Alamos National Laboratory in New Mexico where, building on Teller's ideas, he designed 'Mike', an 82-ton sausage-shaped test device, having worked out how to direct the radiation from the fission trigger to initiate a fusion reaction in the hydrogen – what he called 'the match for the nuclear bonfire'.
The device was detonated on Enewetak Atoll on November 1 1952. 'The shot was fired almost precisely according to Garwin's design,' Teller recalled. The power of the blast – 450 times that of the Nagasaki bomb – stunned even those who had watched previous tests, producing a mushroom cloud five times the height of Everest and 100 miles wide.
Teller subsequently became notorious for destroying the career of Robert Oppenheimer, who had run the Los Alamos laboratory in the Second World War, presiding over the birth of the atomic bomb, but had afterwards questioned the morality of devising an even more powerful weapon. When, amid the anti-communist paranoia of the McCarthy years, Oppenheimer had his security clearance removed by the government, Teller was the only member of the scientific community to testify against him.
In fact Garwin, a board member of the Union of Concerned Scientists, had much sympathy with Oppenheimer, telling an interviewer that if he could wave a magic wand to make the H-bomb go away, 'I would do that.' But as the clock could not be wound back, he believed that the best hope for human survival lay in the deterrence doctrine of Mutual Assured Destruction (MAD), which holds that a nuclear attack by one superpower would bring a retaliatory nuclear strike, leading to the complete destruction of both attacker and defender. 'The capability for MAD,' Garwin said, 'is not a theory, but a fact of life.'
In the 1980s, when Teller persuaded President Ronald Reagan to invest in a defensive shield that, he claimed, would allow enough Americans to survive a nuclear conflict to ensure the country's continued existence, Garwin was a vocal critic of the so-called 'Star Wars' initiative, condemning it as ineffective and wasteful. He saw a Soviet-American balance of weaponry, underpinned by arms-control measures, as the best way of avoiding nuclear Armageddon.
Richard Lawrence Garwin was born in Cleveland, Ohio, on April 19 1928, the older of two sons of Robert Garwin and Leona, née Schwartz. His father was a high school teacher; his mother a legal secretary.
From Cleveland Heights High School, Garwin went on to graduate in physics in 1947 from what is now Case Western Reserve University, followed by a master's degree and a doctorate under Enrico Fermi at the University of Chicago. He joined the Chicago faculty but, at Fermi's suggestion, spent his summers at the Los Alamos laboratory, returning every year until 1966.
For 40 years from the early 1950s Garwin was a researcher at IBM, while maintaining a faculty position at Columbia University and advising every president from Eisenhower to Clinton (excepting Reagan) on nuclear weapons and arms-control issues.
As a researcher he contributed to a huge range of scientific discoveries and innovations, and in 2016, when he was presented with the Presidential Medal of Freedom by Barack Obama, the president recalled: 'Ever since he was a Cleveland kid tinkering with his father's movie projectors, he's never met a problem he didn't want to solve. Reconnaissance satellites, the MRI, GPS technology, the touch-screen – all bear his fingerprints. He even patented a mussel washer for shellfish: that, I haven't used. The other stuff I have.'
In 1991 Garwin chaired a conference on how to staunch the Kuwaiti oil leaks during the first Gulf War. He later advised the Obama administration on dealing with the Deepwater Horizon oil spill in the Gulf of Mexico and the Fukushima nuclear disaster in Japan. From 1993 to 2001 he chaired the State Department's Arms Control and Nonproliferation Advisory Board.
His belief in the vital importance of nuclear balance led him to oppose any policy that might upset it. In 2007, in evidence to the House of Commons Defence Select Committee, he described Prime Minister Tony Blair's claim that work must start soon on replacing the ageing Vanguard-class submarines of Britain's nuclear deterrent as 'highly premature'. The boats' working lives could be extended to 45 years or more, he argued, putting off the need for a replacement into the late 2030s or beyond.
In 2021 he was one of 700 signatories to an open letter to President Biden, asking him to pledge that the US would never be the first to use nuclear weapons in a conflict, and calling for curbs on the president's sole authority to order their use – 'an important safeguard against a possible future president who is unstable or who orders a reckless attack'.
The plea fell on deaf ears.
In 1947 Richard Garwin married Lois Levy. She died in 2018, and he is survived by two sons and a daughter.
Richard Garwin, born April 19 1928, died May 13 2025