For more than a decade, sea stars died by the millions. Now we know what caused it.


Vox · 5 days ago
The author is a senior producer and reporter on Unexplainable, Vox's science podcast. She covers everything scientists don't yet know but are trying to figure out, so her work explores everything from the inner workings of the human body to the distant edges of the universe.
'It was like a battleground,' Drew Harvell remembers. 'It was really horrible.'
She's reflecting on a time in December 2013, on the coast of Washington state, when she went out at low tide and saw hundreds of sick, dying sea stars. 'There were arms that had just fallen off the stars,' she says. 'It was really like a bomb had gone off.'
The stars were suffering from something known as sea star wasting disease. It's a sickness that sounds like something out of a horror movie: Stars can develop lesions in their bodies. Eventually, their arms can detach and crawl away from them before the stars disintegrate completely.
Harvell is a longtime marine ecologist whose specialty is marine diseases. And she was out for this low tide in 2013 because a massive outbreak of this sea star wasting had started spreading up and down the West Coast — from Mexico to Alaska — ultimately affecting around 20 distinct species of sea stars and wiping out entire populations. In the decade since, some species have been able to bounce back, but others, like the sunflower sea star, continue to struggle. In California, for example, sunflower stars have almost completely died out.
The question in 2013 was: What, exactly, was killing all these stars? While marine ecologists like Harvell could recognize the symptoms of sea star wasting, they weren't actually sure what was causing the disease. From the very beginning, though, it was something they wanted to figure out. And so, soon after the outbreak started, they collected sea stars to see if they could find a pathogen or other cause responsible for the wasting. The hunt for the culprit of this terrible, mysterious disease was on.
Unfortunately, it was not straightforward.
'When this disease outbreak happened, we knew quite little about what was normal [in sea stars],' says Alyssa Gehman, who is also a marine disease ecologist. She says that when researchers are trying to do similar work to chase down a pathogen in, say, humans, they have an enormous trove of information to draw on about what bacteria and viruses are common to the human body, and what might be unusual. Not so for sea stars. 'We maybe had a little bit of information, but absolutely not enough to be able to really tease that out easily.'
Also, Gehman says, there can be a lag before the disease expresses itself, so some stars have the pathogen that caused the disease, but don't present with symptoms yet, making it harder for scientists to even distinguish between sick stars and healthy ones as they run their tests.
So even though a research team identified a virus that they thought might be associated with the wasting disease as early as 2014, over time, it became clear that it was most likely not the culprit, but rather just a virus present in many sea stars.
'The results were always confusing,' Harvell remembers.
In the decade since the initial mass outbreak, other researchers have proposed other theories, but none have brought them to a definitive answer either. And yet, it became increasingly clear that an answer was needed, because people started to realize just how important the sunflower stars they had lost really were.
'We actually learned a lot from losing so many of these animals at once,' Gehman says.
Before the outbreak, she says, they'd known that sunflower stars — giant sea stars that can be the size of dinner plates, or even bike tires — were skillful hunters and voracious eaters. They even knew that many things on the seafloor would run away from them. Gehman remembers taking a class on invertebrates back in college, where she learned that if you put even just the arm of a sunflower star in a tank with scallops, 'the tank would explode with scallops swimming everywhere trying to get away.'
But all that fearsome hunting was, it seems, pretty key to ecosystem health. In many places, she says, 'after the sunflower stars were lost, the urchin populations exploded.'
And so the die-off of the sunflower star and the explosion of urchins have been connected to the collapse of the Northern California kelp forests, a marine ecosystem that provides a home for a rich diversity of species.
A cross-state, cross-organizational partnership between the Nature Conservancy and a variety of research institutions is working hard to breed sunflower sea stars in captivity in the hopes that they can be reintroduced to the coast and reassume their role in their ecosystems. But as Harvell remembers, she and Gehman knew that no recovery project would be successful if they couldn't find the cause of sea star wasting disease.
'You're not gonna be able to get these stars back in nature if you don't know what's killing them,' she says.
So in 2021, as part of the larger partnership, Harvell and Gehman, along with a number of their colleagues, launched into an epidemiological detective project. Their quest: to finally pin down the cause of sea star wasting disease.
'Really the work over the four years was done in the trenches by Dr. Melanie Prentice and Dr. Alyssa Gehman,' Harvell says, 'and then one of my students, Grace Crandall.'
It was an emotionally difficult project because it required Gehman and her colleagues to deliberately infect many stars with the disease.
'It feels bad,' she admits, and they would be open about that in the lab, 'but we also can remember that we're doing this for the good of the whole species.'
That work has paid off, though, and now, after four years of research, they've nailed their culprit in a paper out in Nature Ecology & Evolution today.
What follows is a conversation with Drew Harvell, edited for clarity and length, about what she and her collaborators found, how marine ecologists do this kind of detective work, and what identifying the culprit could mean for the future health of sea stars.
The underside of an adult sunflower sea star. Dennis Wise/University of Washington
How did you start the journey to figure out what actually had happened?
Well, we chose to work with the sunflower star because we knew it was the most susceptible and therefore was going to give us the most clear-cut results. So we set up at Marrowstone Point, which was the USGS Fisheries virus lab [in Washington state], because that would give us the proper quarantine conditions and lots of running seawater.
The proper quarantine conditions — what does that mean?
All of the outflow water has to be cleansed of any potential virus or bacterium, and so all of the water has to be run through virus filters and also actually bleached in the end, so that we're sure that nothing could get out.
We did not want to do this work at our lab, Friday Harbor Labs, or at any of the Hakai labs in Canada because we were really worried that if we were holding animals with an infectious agent in our tanks without really stringent quarantine protocols, that we could be contributing to the outbreak.
So you have these sea stars. They're in this quarantined environment. What is the methodology here? What are you doing to them or with them?
So the question is: Is there something in a diseased star that's making a healthy star sick? And that's like the most important thing to demonstrate right from the beginning — that it is somehow transmissible.
And so Melanie and Alyssa early on showed that even water that washed over a sick star would make healthy stars sick, and if you co-house them in the same aquarium, the healthy ones would always get sick when they were anywhere near or exposed to the water from a diseased star.
There's something in the water.
That's right. There's something in the water. But they wanted to refine it a little bit more and know that it was something directly from the diseased star. And so they created a slurry from the tissues of the diseased star and injected that into the healthy star to be able to show that there really was something infectious from the diseased star that was making the healthy star sick and then die.
And then you control those kinds of what we call 'challenge experiments' by inactivating in some way that slurry of infected disease stuff. And in this case, what they were able to do was to 'heat-kill' [any pathogens in this slurry] by heating it up. And so the thing that was very successful right from the beginning was that the stars that were infected with a presumptive disease got sick and died, and the controls essentially stayed healthy.
You do that control to make sure that it's not like…injecting a slurry into a star is what makes them sick?
That's right. And you're also having animals come in sick, right? So you want to know that they weren't just gonna get sick anyway. You want to be sure that it was what you did that actually affected their health status.
So you have a slurry — like a milkshake of sea star — and you know that within it is a problematic agent of some kind. How do you figure out what is in that milkshake that is the problem?
The real breakthrough came when Alyssa had the idea that maybe we should try a cleaner infection source and decided to test the coelomic fluid, which is basically the blood of the star. With a syringe, you can extract the coelomic fluid of the sick star and you can also heat-kill it, and you can do the same experiment challenging with that. And it was a really exciting moment because she and Melanie confirmed that that was a really effective way of transmitting the disease because it's cleaner.
Grace Crandall injects a sea star to expose it to wasting disease at the start of a new experiment. Courtesy Grace Crandall/University of Washington
Drew Harvell holds a sunflower star at UW Friday Harbor Laboratories. David O Brown/Cornell University
The team poses in the lab at the USGS Marrowstone Marine Field Station. From left to right: Alyssa Gehman, Grace Crandall, Melanie Prentice and Drew Harvell. Courtesy Grace Crandall/University of Washington
It's cleaner, like there's less stuff than in the tissue? Like blood is just like a simpler material?
Right. So, that was really the beginning of being able to figure out what it was that was in the coelomic fluid that was causing the disease.
So basically it's like: We're gonna look in every sample in this fluid. There's gonna be sort of an ingredient list. And in the first one, there's ingredients ABC. In the second one, there's ingredients BDF. And in the third one, there's ingredients BYZ… So it seems like it might be ingredient B that's causing the problem here because it's consistent across all samples?
Yeah, that's exactly it. And so then that was incredibly exciting. Wow. There's this one bacterium — Vibrio pectenicida — that's showing up in all of the diseased material samples. Could it be that?
We weren't sure. We sort of thought, after 12 years, this is gonna be something so strange! So weird! You know, something alien that we've never seen before. And so to have a Vibrio — something that we think of as a little bit more common — turn up was really surprising.
Then one of our colleagues at the University of British Columbia, Amy Chan, was able to culture that particular bacterium from the diseased star. And so now she had a pure culture of the presumptive killer. And then last summer, Melanie and Alyssa were able to test that again under quarantine conditions and find that it immediately killed the stars that were tested.
How did you all feel?
Oh, we were definitely dancing around the room. It was — just such a happy moment of fulfillment. I really do like to say that at the beginning of the task that Nature Conservancy handed us — to figure out the causative agent — we told them again and again that this is a very risky project. We can't guarantee we're going to be successful.
So yeah, we were incredibly elated when we really felt confident in the answer. It was just hundreds and hundreds of hours of tests and challenge experiments that came out so beautifully.
What does it mean to finally have an answer here? What are the next steps?
This was the part of it that really kept me awake at night because I just felt so worried early on at the idea that we were working on a roadmap to recovery of a species without knowing what was killing it, and I just felt like we couldn't do it if we were flying blind like that.
We wouldn't know what season the pathogenic agent came around. We wouldn't know what its environmental reservoirs were. We wouldn't know what was making stars susceptible. It was going to be really hard, and it wasn't going to feel right to just put animals out in the wild without knowing more.
And so knowing that this is one of the primary causative agents — maybe the only causative agent — allows us to test for it in the water. It allows us to find out if there are some bays where this is being concentrated, to find out if there are some foods the stars are eating that are concentrating this bacterium and delivering a lethal dose to a star.
Now we'll be able to answer those questions, and I think that's going to give us a really good opportunity to design better strategies for saving them.
It feels like you now have a key to use to sort of unlock various pieces of this.
We totally do. And it's so exciting and so gratifying because that's what we're supposed to do, right? As scientists and as disease ecologists, we're supposed to solve these mysteries. And it feels really great to have solved this one. And I don't think there's a day in the last 12 years that I haven't thought about it and been really frustrated we didn't know what it was. So it's particularly gratifying to me to have reached this point.
Drew Harvell is the author of many popular science books about marine biology and ecology, including her latest, The Ocean's Menagerie. She also wrote a book about marine disease called Ocean Outbreak.
Related Articles

The brutal trade-off that will decide the future of food
The brutal trade-off that will decide the future of food

Vox

time11 hours ago

  • Vox

The brutal trade-off that will decide the future of food

is a deputy editor for Vox's Future Perfect section. Before joining Vox, she reported on factory farming for national outlets including the Guardian, the Intercept, and elsewhere. Perhaps the most crucial idea for understanding our species' future on this planet boils down to two boring words: land use. To mitigate climate change, humans will need to extract critical minerals to build vast numbers of photovoltaic cells and wind turbines. We'll need millions of tons of copper to wire continent-spanning power grids. But the most immutable resource constraint we face — the one we can't mine more of — is land. Although many of us don't see it, because most humans now live in urban areas, the story of land constraints is really a story about agriculture, which devours nearly half of our planet's habitable land; urban and suburban areas take up only a tiny fraction. Processing Meat A newsletter analyzing how the meat and dairy industries impact everything around us. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. We're not using all that farmland very wisely. Beef farming, for example, occupies 'nearly half the world's agricultural land to produce just 3 percent of its calories,' the journalist Michael Grunwald writes in his new book, We Are Eating the Earth. In part because it consumes so much land, agriculture contributes between a quarter and a third of all greenhouse gas emissions, and as humanity's numbers climb, its footprint will swell. 'If current trends hold, the world's farmers will clear at least a dozen more Californias' worth of land to fill nearly 10 billion human bellies by 2050,' Grunwald writes. 
Grunwald's book — a lively, reportorial world tour through the misunderstood science and politics of agriculture, often explained via Gen X movie references — is among a slate of new titles that I like to think of as the abundance agenda of food. Abundance, Ezra Klein and Derek Thompson's bestselling it-girl of wonk manifestos, shares intellectual DNA with a growing set of ideas bringing supply-side economic principles to the future of farming. Just as we can't solve the housing crisis or the green energy gap with a politics of scarcity, we can't fix agriculture's planetary impact by simply producing less food. We have to grow enough food to affordably and sustainably feed a world of 8 billion and counting. And because there's a hard limit on land, that means figuring out how to squeeze more food out of our precious acreage. The proposed solutions might surprise you. They are not crunchy farming philosophies like local agriculture or so-called regenerative ranching — woefully inefficient, low-productivity systems that, if deployed at scale, would mean mowing down the world's remaining forests, accelerating climate change and mass extinction. That's because wild, carbon-sequestering ecosystems are our best natural defenses against climate change, which is something that no agricultural pattern can replicate. 'Every farm, even the scenic ones with red barns and rolling hills that artists paint and writers sentimentalize, is a kind of environmental crime scene,' Grunwald writes. And today, 'global agriculture is shifting south, toward tropical forests and wetlands that are the world's most valuable carbon sinks,' like the Amazon. That means the most important determinant of agriculture's planetary impact is how much land it sucks up — what Grunwald calls 'the eating-the-earth problem.' 
By this measure, conventional, intensive, industrial crop farming like that practiced across the US, and heavily criticized by many environmentalists, outperforms organic agriculture or low-yield farming common in low-income countries, for the simple reason that it produces the most food on the least land (though there is, to be sure, nuance to this debate). We Are Eating the Earth is joined by a grumpier, more academic provocation on food sustainability. Food Fight, by UC Davis agricultural economist Richard Sexton, decries the policies being implemented around the world, often in the name of helping the environment, that will make farming less productive and less sustainable, and food more expensive. 'Never have governments actively intervened to implement policies guaranteed to reduce food production the way they do today and promise to do into the future,' he argues, dismantling approaches ranging from senseless ethanol mandates in the US and elsewhere to Europe's pro-organic and anti-GMO policies. These are intelligent, highly timely books that get many things right, surfacing the misguided pastoral fantasies and fatal misunderstandings of land use that make it hard for us to pursue sane agricultural policies. They inspire due respect for a modern industrial food system that, for all its problems, has achieved spectacular feats of productivity necessary to support a planet of billions of people. But their emphasis on intensification also leads them somewhere far more ominous: a defense of the worst part of our food system, one that will lead to ever-more horrifying levels of suffering and death. The rise of anti-anti-factory farming Repairing our food system is so confoundingly difficult in part because it often feels more intractable than it needs to be. 
We already know we could alleviate a lot of the problem by eating less meat and dairy — the food equivalent of coal power — and more plants, but convincing consumers to do that through either policy or suasion is really, really hard. (Believe me, I try). 'One American pollster told me meat taxes were the most unpopular policy he ever surveyed, 'up there with veterans' benefits for ISIS,'' Grunwald grimly remarks. And one of the surest bets you can place on the future, as both authors point out, is that as people in low- and middle-income countries become richer, they will eat lots more animal products. Humans already slaughter an eye-watering 80 billion land animals per year, a number that will continue to soar. Resigned to that dismal reality, both We Are Eating the Earth and Food Fight reflect an idea that's increasingly prevalent in future of food debates — that factory farms, despite their cruelty, are a necessary evil. Call it anti-anti-factory farming. The reasoning is straightforward enough. Animal agriculture takes up lots of land and resources — that's why meat is bad for the environment in the first place. The only way to produce it at scale without blowing up climate targets and clearing rainforests is to raise animals as intensively as possible through what's called 'sustainable intensification.' Factory farms don't exist merely to be evil, after all, but rather because they produce animal products with the fewest possible inputs. Just as much as these books puncture Michael Pollan-esque pipe dreams of feeding the world with pasture-raised steak, they also have little patience for animal rights activists who want to regulate factory farming out of existence. 
Thus included on Sexton's list of misguided policies are animal welfare laws like California's Proposition 12, which ban some of the most extreme forms of confinement for farm animals, including caging female breeding pigs in crates so small they're comparable to spending an entire human life trapped inside a coffin. 'Policies being imposed in the name of animal welfare reduce the productivity of these animals and raise the costs of producing animal products,' he writes. In fairness to Sexton, whom I have an enormous amount of respect for and have interviewed for numerous stories, he suggests what he argues is an alternative, less costly route to achieving the welfare benefits of Prop 12. 'I like animals and want them to be treated well,' he writes. Pigs housed in gestation crates. Jo-Anne McArthur/We Animals Media Grunwald more gingerly suggests factory farms remain an inevitable, if inhumane and not ideal, part of food production. In a controversial New York Times essay last December, he argued, 'the inconvenient truth is that factory farms are the best hope for producing the food we will need without obliterating what's left of our natural treasures and vaporizing their carbon into the atmosphere.' One of the surest, most realistic ways to reduce meat's outsize land and carbon footprints this century, Grunwald writes, is for diets to replace beef with poultry and pork, which are far lower in climate impact. But that trade would be morally calamitous — it takes far more individual chickens and pigs to produce the same amount of meat as beef cattle, and those animals are treated far worse. The anti-anti-factory farming ethos is certainly a way of looking at our food system. There's a brutal logic to it that anti-factory farm advocates have to learn to contend with. 
Take dairy cows as just one illustrative example: Breeding them for maximal productivity has meant that 'since World War II, the US dairy herd has shrunk by two-thirds, yet produces two-thirds more milk,' Grunwald writes. Had that not happened, we'd have more dairy cows emitting more greenhouse gases, and we'd likely have cleared more land and harmed more ecosystems to grow the crops that feed them. In much of the rest of the world, dairy herds are much less productive, thereby consuming more resources and polluting the climate more for every gallon of milk produced. Yet America's hyper-productive turbo-cows have come at a severe cost to animal welfare. Dairy cows are some of the most miserable animals in our food system: Like all mammals, they only make milk after giving birth, to feed their babies, but they've been bred to produce far more than a calf would drink. These astronomical yields destroy the dairy cow's body, forcing her to channel 'freakish' amounts of energy into milk production, as the food historian Anne Mendelson has written. (One might argue that the counterfactual would be even worse: a world with more, less-productive dairy cows, each enduring a life of continual pregnancy and separation from their calves. Nevertheless, the sheer extremity of the modern turbo-cow's suffering, and the prospect of bringing many more of them into the world, crosses a moral threshold.) Related The life of a dairy cow All this for a food that still, even after cows have been pushed beyond the limits of decency, remains significantly worse for the environment than simply eating plant-based foods. So is industrial milk really a win for the planet? Is there a way out? 
One of my favorite visualizations the global food system comes from Our World in Data: Plant-based foods — that is, everything that's not meat, dairy, and eggs — already supply more than 80 percent of the world's calories, and nearly two-thirds of our protein, with just 16 percent of global agricultural land. One conclusion you could draw from this chart is that animal agriculture is so inherently inefficient — we grow feed crops to raise animals that we then slaughter to feed ourselves — that we have to work hard to find ways of making it more productive. Another way of looking at it is that animal agriculture is so inefficient — and, by the way, it comes at an unthinkable moral cost, and it might start the next pandemic — that it would be the definition of lunacy to squander limited global carbon budgets to produce an ever-greater share of our food this way. But there's no single council of humanity that can make that decision for our species — only billions of individuals making market choices. And they have shown every sign that they are going to keep eating meat. So Grunwald calls for an all-of-the-above approach. We Are Eating the Earth roots for the success of meat alternatives like plant-based and cell-cultivated meat — and it made me feel more optimistic about their future than I have in a long time — just as much as it embraces intensive animal production. Innovation can also make intensive crop agriculture more planet-friendly, as Grunwald explores, by making it less dependent on inputs that harm wildlife, like chemical pesticides. The logic of anti-anti-factory farming genuinely challenged me, because as impossible as its choices feel — do we torture several billion more animals per year, or let the Amazon burn? — they are real trade-offs that policymakers face every day. 
It's hard to compare the despoiling of irreplaceable ecological wonders to the infernal horror of the factory farm according to a cost-benefit analysis, because they feel incommensurate. But if we tried to do it honestly, I'm not sure the answer would be as clear as factory farming's defenders suggest. Their case only works because food systems analysis sees animals as economic inputs, not much different than a bushel of wheat, rather than as who they really are. It doesn't seriously engage with what it really means to farm animals for food — the incessant pain of a modern broiler chicken, or the mind-numbing despair of a caged mother pig used as a reproductive machine. So let me offer one more new book recommendation: my friend the philosopher John Sanbonmatsu's The Omnivore's Deception. Another rebuke of Michael Pollan and his defense of eating animals, it's the rare book that unshrinkingly names our tyranny over animals as a 'civilizational error,' as Sanbonmatsu writes. It's 'about what happens when we organize our society, economy, and daily lives around a radical evil, then engage in self-deception to keep the truth of that evil from ourselves.' We Are Eating the Earth is, to a great extent, a work of unsentimental pragmatism, which makes the spirited case for principled idealism in the book's final moments all the more potent. Sometimes progress depends on a 'refusal to read the room and stop saying things nobody wanted to hear,' Grunwald writes. 'It pays to keep working and fighting the good fight, because maybe something good will happen. Maybe it won't, but if you don't keep working and fighting, it definitely won't.' We should look at animal agriculture the same way. We could continue turning our planet into a giant factory farm, but then, what are we even doing all this for? If we continue to ignore one of the greatest atrocities of our time — and expand it even further — what would be the point of building such a world? 
All over the globe, there are animal advocates urging their fellow humans to change course, and the only way we'll feel our way out of the factory farm trap is to commit to that task. We don't know if we'll ever convince humanity to abandon the 'radical evil' of factory farming, but it would be an abdication to give up trying.

How 'the Grim Reaper effect' stops our government from saving lives
How 'the Grim Reaper effect' stops our government from saving lives

Vox

timea day ago

  • Vox

How 'the Grim Reaper effect' stops our government from saving lives

is a senior correspondent and head writer for Vox's Future Perfect section and has worked at Vox since 2014. He is particularly interested in global health and pandemic prevention, anti-poverty efforts, economic policy and theory, and conflicts about the right way to do philanthropy. Last summer, the Congressional Budget Office released a report under the unassuming name 'Budgetary Effects of Policies That Would Increase Hepatitis C Treatment.' I read it because I am the type of person who is interested in the budgetary effects of policies that would increase hepatitis C treatment. Embedded in the report, though, was a point that will be important for just about anything the federal government tries to do to save the lives of Americans. Hep C is a nasty viral infection whose effects are, for a virus, unusually long-lasting. Untreated, it causes serious liver damage over the course of decades, leading to much higher rates of cirrhosis and liver cancer, all of which is very expensive to treat. But in the 2010s, a number of extremely effective antivirals, which randomized trials show cure upwards of 95 percent of chronic infections, came on the market. Like most new drugs, these antivirals are under patent and quite expensive; as of 2020, the cost of an eight-to-twelve week course of the drugs, usually enough to cure an infection, was between $11,500 and $17,000. Yet CBO concludes that the drugs are so effective, and the costs of treating patients with hep C who haven't been cured are so massive, that expanding treatment with these drugs reduces federal spending on hep C treatment and associated complications overall. Doubling the number of Medicaid patients getting the drugs would increase federal spending by $4 billion over 10 years. But over the same decade, the federal government would save $7 billion through reduced need for treatments like liver transplants and ongoing care for chronic cases. 
Put like that, this starts to sound like one of the rarest discoveries in federal budgeting: a free lunch. That means a policy that is good on its own merits (saving lives and preventing debilitating chronic disease) but also saves the government money. But the most interesting part of the report to me comes at the end. 'An increase in hepatitis C treatment could also affect the federal budget in other ways—for example, by leading to improved longevity and lower rates of disability,' the authors note. The latter point is pretty straightforward: If hepatitis C leads to disabilities that make people eligible for disability insurance and subsidized health coverage, then reduced hep C means lower spending on those programs. But (and this is me speculating, so blame me and not the CBO if I'm wrong) that effect is probably swamped by that of 'improved longevity.' Simply put: curing hep C means people live longer, which means they spend more years collecting Social Security, Medicare, and other benefits. That could mean that whatever cost savings the actual hep C treatment produces might be wiped out by the fact that the people whose lives are being saved will be cashing retirement checks for longer. I like to call it the Grim Reaper effect. The US runs a large budget deficit. It also provides far more generous benefits to seniors than to children or working-age adults. Per the Urban Institute's regular report on government spending for children, the ratio of per capita spending on senior citizens to per capita spending on children is over 5 to 1. Put together, the deficit and the elder-biased composition of federal spending implies something that is equally important and macabre: helping people live longer lives will, all else being equal, be bad for the federal budget. In an increasingly aging country, hep C is not the first place where the Grim Reaper effect has been felt, and it won't be the last. 
I don't have an easy fix for the situation, but it feels important to at least understand.

Logan's Run economics

One of the first and clearest cases of this longevity dilemma in budgeting came with cigarettes. The history of mass cigarette smoking in the US is surprisingly short. Per the CDC, American adults were only smoking 54 cigarettes annually per capita as of 1900. By 1963, that number had grown to 4,345. The development of automatic rolling machines, milder forms of tobacco, and mass marketing meant millions of working- and middle-class Americans became pack-a-day smokers.

But while the per capita average floated around 4,000 from the late '40s to the early '70s, it then began a precipitous decline. In 2022, the most recent year for which the Federal Trade Commission released data, Americans bought 173.5 billion cigarettes, or 667 per adult, less than a sixth of the peak, while fewer than 12 percent of American adults now smoke.

Cigarettes are, of course, deadly, but they kill with a lag, usually after decades of regular smoking. That meant that in the late 1980s and 1990s, the US started to hit peak cigarette deaths, as adults who came of age during the smoking era started to get lung cancer and emphysema en masse, at numbers that less-addicted subsequent generations wouldn't match. The male death rate from lung cancer peaked in 1990, and the female death rate peaked in 1998.

A flurry of economic research at the time tried to make sense of what this meant for the federal budget. Smoking harms your health, but it also shortens your lifespan. A useful 1998 Congressional Budget Office report noted that most research found that, over their lives, smokers spend more in health care costs (including more that goes on the federal tab) than non-smokers, even accounting for their shorter lifespans. But that picture changed once you added in pensions and other non-health spending.
Economists John Shoven, Jeffrey Sundberg, and John Bunker in 1989 estimated that the average male smoker saved Social Security $20,000 (about $60,000 today) in benefits not paid. The figure for women, who live longer than men on average but earn less in wages and thus in Social Security benefits, was about half that. 'It seems likely that the Federal budget currently benefits from smoking,' two Congressional Research Service researchers concluded in 1994, when the 'benefits' of early death to Social Security and Medicare were included. Malcolm Gladwell, in a thoughtful 1990 treatment of the problem in the Washington Post, was catchier: 'Not Smoking Could be Hazardous to Pension System.'

Decades later, the CBO did a fuller analysis of the budgetary consequences of smoking in the aftermath of the large cigarette tax increase President Obama signed in early 2009 and proposals for further hikes. At first blush, the revenue raised from a cigarette tax should be easy to estimate: multiply annual cigarette sales by the amount of the tax. But obviously, raising the price of a good will reduce the amount people buy; one major reason for cigarette taxes, after all, is to deter smoking. The CBO used a price elasticity of -0.3, meaning that a 10 percent increase in cigarette prices reduces the number sold by 3 percent.

But the 2012 report was meant to go a step or two further, according to then-director Doug Elmendorf, who explained the backstory in a recent conversation with me. 'The effects of making people healthier are good for those people, obviously, but also perhaps good for the federal budget because the federal government pays for a lot of health care. If you're healthier, you don't need so much health care.' But at the same time, 'It was clear that if people were healthier, they would live longer, and that could have budgetary costs. It wasn't obvious offhand what the balance of those effects would be.'
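That elasticity assumption translates directly into a quantity estimate. A toy calculation, using the 2022 sales figure cited earlier and a hypothetical 10 percent price hike (the function and names are illustrative, not the CBO's model):

```python
# CBO's assumed price elasticity of demand for cigarettes: a 10 percent
# price increase reduces the number of cigarettes sold by 3 percent.
ELASTICITY = -0.3

def sales_after_price_increase(base_sales: float, price_increase_pct: float) -> float:
    """Approximate sales volume after a price increase, assuming constant elasticity."""
    return base_sales * (1 + ELASTICITY * price_increase_pct / 100)

# 173.5 billion cigarettes sold in 2022; a hypothetical 10 percent price hike:
new_sales = sales_after_price_increase(173.5, 10)
print(f"{new_sales:.1f} billion cigarettes")  # 168.3 billion
```

Any revenue score starts from that reduced quantity, not the pre-tax sales volume.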
The 2012 CBO report tried to put all these effects together: the effect of lower smoking on reducing health-care spending (including government-funded spending) due to a healthier population, the effect on Social Security and other benefit spending from resulting longer lifespans, the effect of lower smoking rates on wages, and tax revenue from those wages. (The latter is often not included in formal CBO scores, as it tips closer to 'dynamic' scoring, where the effect of legislation on the overall economy is included.)

Over the first 10 years after a hike in the cigarette tax, they found that having a healthier population was more of a blessing than a curse, budget-wise. The health effects of a cigarette tax hike reduced federal health spending by over $900 million over a decade, even after accounting for people living longer and claiming more years of Medicare. By contrast, retirement programs only spent $183 million more because people lived longer. Swamping all that was a $2.9 billion increase in tax revenue from a healthier population capable of working and earning more.

But that's just the 10-year effect. As the decades pass, the effect of longevity would grow and grow. First, Medicare costs would start to rise, as the cost of a longer-lived population began to swamp the cost savings of that population being healthier overall. (Even people who've been healthy for a long time can run up major health spending at the end of their now-longer lives.) Social Security costs would keep rising, too. Fifty years in, these costs would overwhelm the benefits, and the cigarette tax's health effects would start costing the budget, on average.

The point isn't 'cigarette taxes are good' or 'cigarette taxes are bad.' The point is that even a policy that saves lives isn't necessarily a slam dunk from the hard-eyed perspective of budget policy. Recent years provided a possibly even darker example.
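Summing those three 10-year effects gives the report's bottom line. A quick sanity check, with the figures cited above in millions of dollars and signs chosen so that positive means the budget improves (a rough tally, not the CBO's full model):

```python
# 10-year budget effects of a cigarette tax hike's *health* consequences,
# per the 2012 CBO figures cited above (millions of dollars; illustrative).
health_spending_savings = 900    # lower federal health spending
extra_retirement_costs = -183    # more Social Security etc. paid to longer-lived people
extra_tax_revenue = 2_900        # healthier workers earn and pay more

net_improvement = health_spending_savings + extra_retirement_costs + extra_tax_revenue
print(f"Net 10-year budget improvement: ~${net_improvement:,} million")  # ~$3,617 million
```

Over the first decade, in other words, the health effects improve the budget by roughly $3.6 billion; it's only over the following decades that the longevity costs catch up.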
In 2022, the Medicare Trustees pushed back the date they expected the program's Hospital Insurance Trust Fund to be depleted by two years. They had several reasons, but a major one was that Covid-19 had killed hundreds of thousands of Medicare patients prematurely. Not only that, but 'Medicare beneficiaries whose deaths were identified as related to COVID had costs that were much higher than the average Medicare beneficiary prior to the onset of the pandemic.' Put another way: Covid killed off Medicare's sickest, and most expensive, enrollees. That meant the program was left with an overall healthier population, which by itself lowered medical costs by 2.9 percent in 2021.

Similarly, a paper by a team of health economists earlier this year estimated that the 1.4 million excess deaths in the US due to Covid had the net effect of boosting the Social Security trust fund to the tune of $156 billion. That represented $219 billion in benefits that no longer needed to be sent, minus $44 billion in lower payroll tax revenues and $25 billion in new benefits to surviving family members.

It all reminds one of Logan's Run, in which people are killed off upon hitting age 30 lest they take up too many of society's resources. That movie is a dystopia — but as a budget proposal, it'd score very well.

It's good to save lives, actually

The economists and agencies doing this math are, of course, only doing their jobs. We need to know what government programs will cost over the near and long run. These effects on health and life and death matter to those calculations.

'Members of Congress regularly thought that we were ghoulish for talking about how, if people live longer, there'll be higher benefits for Social Security,' Elmendorf recalls. 'But it's not ghoulish. Obviously, we want to live longer, and members of Congress should try to help all Americans live longer. CBO's job — an analyst's job in general — is just to be honest about the likely effects.'
But the fact that increased human longevity on its own worsens the budget picture should lead to some reflection. For one thing, it suggests that sometimes we should embrace policies simply because they're the right thing to do, even if they don't pay for themselves.

Recall the hepatitis C treatments that spare Medicaid expensive long-term complications but might add new costs by extending beneficiaries' lifespans. It's possible that, upon taking the latter into account, expanding access to hep C drugs costs the government money on net. It's a free lunch no longer.

That's not a reason not to embrace the policy, though. Lots of things the government does cost money. The military doesn't pay for itself. K–12 schools don't pay for themselves. Smithsonian museums don't pay for themselves. That doesn't mean those aren't important functions that it makes sense to put some of our tax dollars toward. Hep C treatment, I think, fits in that list, even if it's not literally free from a budget standpoint.

Congress should also allow agencies like the CBO to account more symmetrically for the positive budgetary effects of longevity, along with the negatives. People who live longer, after all, often earn wages in those new years of life, wages that generate income and payroll tax revenues for the federal government. Moreover, people at the end of their careers are earning more money, and hence paying more taxes, than young people, meaning that life extension helping people in their 50s and 60s might be especially good for tax revenue.

The problem is that the CBO generally considers 'how many workers paying taxes are there' to be an economic effect and only considers it in special 'dynamic' scores of legislation, in which a bill's broader economic consequences are taken into account.
Dynamic scoring has been a topic of great controversy for decades, going back at least to the second Bush administration, but the rule Congress sets for the CBO on when to use it means the agency applies dynamic scoring very rarely in practice. A middle-ground option, though, would be something called 'population change' scoring, in which the CBO considers the direct effects of a change in the population (through longer lifespans, say, or immigration) on the level of employment and tax revenue, without doing a full, more complicated dynamic score. That would make its accounting of the effects of longer lives less biased: The budgetary benefits would be counted alongside the costs.

We should also consider the aspects of our budget situation that make the longevity effect a reality. One is the US's long-standing, bipartisan choice to run massive budget deficits, even during relative boom times. One arithmetic consequence of that choice is that it makes the continued existence of every American a net loss for the country's books. That's not the main reason to avoid large deficits during booms, but it's a somewhat toxic byproduct all the same.

The other aspect driving this effect is the choice to invest government resources very heavily in seniors relative to other age groups. This is due in large measure to the US choice to provide universal health care for seniors but not other age groups, and to our lack of investment in very young children and working-age adults compared to other rich nations. There is no law of nature saying the US has to weigh its priorities that way. As long as we do, the numbers will imply that it's better for the budget for people to die before they get old.

Headphones that scan your brainwaves and keep you focused? It's not science fiction.

Vox

2 days ago



is a senior technology correspondent at Vox and author of the User Friendly newsletter. He's spent 15 years covering the intersection of technology, culture, and politics at places like The Atlantic, Gizmodo, and Vice.

For the past few months, when I really needed to get something done, I put on a special pair of headphones that could read my mind. Well, kind of.

The headphones are equipped with a brain-computer interface that picks up electrical signals from my brain and uses algorithms to interpret that data. When my focus starts to slip, the headphones know it, and an app tells me to take a break. It sounds like something out of science fiction, but a decade-old startup called Neurable is pioneering the technology, and it's preparing to put the brain-tracking tricks into more gadgets. Earbuds, glasses, helmets — anything that can get an electrode near your head could provide a real-time stream of data about what's going on inside of it.

Neurable's technology uses a combination of electroencephalography (EEG) sensors to collect brain data and algorithms to interpret those signals. Beyond measuring attention, the company is now using that data to track and improve brain health. I want to emphasize that this technology does not actually read your mind in the sense of knowing your thoughts.
But it knows when you're entertained or distracted, and it could one day detect symptoms of depression or, on a much more consequential front, early signs of Alzheimer's disease.

I came across Neurable on a longer mission to understand the future of health-tracking technology by testing what's out there now. It's one that left me anxious, covered in smart rings and continuous glucose monitors, and more confused about the definition of well-being. That's because almost all the health trackers that are popular on the market right now — Apple Watches, Oura Rings, Whoop bands — are downstream sensors. They measure consequences, like elevated heart rate or body temperature, rather than the root cause of that state. By tapping directly into your brainwaves, a brain-computer interface can spot issues sometimes years before they would otherwise show up.

'Biologically, your brain is designed to hide your weaknesses: It's an evolutionary effect,' Neurable's co-founder and CEO Ramses Alcaide, a neuroscientist, told me. 'But when you're measuring from the source, you pick up those things as they're occurring, instead of once there's finally downstream consequences, and that's the real advantage of measuring the brain.'

Other major tech companies are also exploring ways to incorporate non-invasive brain-computer interfaces into headphones. A couple of years ago, Apple quietly applied for a patent for an AirPod design that uses electrodes to monitor brain activity, and NextSense, which grew out of Google's moonshot division, wants to build earbud-based brain monitors for the mass market. There's also been a recent boom in activity around invasive brain-computer interfaces, developed by companies like Elon Musk's Neuralink and even Meta, that surgically implant chips into people's brains. It's safe to say that's not currently a mass-market approach.
Still, while all of those mega-market-cap companies ponder the possibilities of their own brain-powered projects, Neurable's is on the market. It's on my head right now, actually, and it works.

The cutting edge of neurotech

The Master & Dynamic MW75 Neuro — the $700 pair of headphones I tested — looks like any other set of noise-canceling headphones, except for the badge that reads, 'Powered by Neurable AI.' Things get fun when you connect them to the Neurable app.

Inside the Neurable app is a little video game that lets you fly a rocket ship with your brain — and serves as a proof of concept. The trick is that you have to focus on a set of numbers on the screen. The more intensely you focus, the higher the numbers go, and the faster the rocket ship flies. If you start to get distracted by, say, thinking about flying an actual rocket ship, the numbers go down, and the rocket ship slows. It's one of the coolest innovations I've ever seen, if only because it's so simple.

The EEG sensors in Neurable's products can pick up a range of brainwave frequencies, which are associated with different behaviors and activities. The beta frequency band provides some information about attention state as well as anxiety, while alpha indicates a mind at rest. While EEG sensors and brain-computer interfaces are most often seen in labs, putting these sensors into a device that people wear every day stands to transform our understanding of the mind.

'Non-invasive EEG is cheap and completely safe,' said Bin He, a professor of biomedical engineering at Carnegie Mellon University, whose lab built a drone you could fly with your mind over a decade ago. 'AI, or deep-learning technology, however, has drastically improved the performance of [brain-computer interfaces] to read the minds of individuals.'

If you changed the technology's mission from measuring focus to, say, detecting symptoms of depression, you could imagine how an everyday gadget could offer some life-changing interventions.
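The band distinctions mentioned here are standard in EEG work: alpha is roughly 8–12 Hz and beta roughly 13–30 Hz. As a rough illustration of the underlying signal processing (a minimal sketch, not Neurable's actual pipeline, which isn't public), band power can be estimated from a raw voltage trace with an FFT:

```python
import numpy as np

SAMPLE_RATE = 256  # Hz, a typical EEG sampling rate

def band_power(signal: np.ndarray, low_hz: float, high_hz: float,
               fs: int = SAMPLE_RATE) -> float:
    """Mean spectral power of the signal between low_hz and high_hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return float(power[mask].mean())

# Synthetic one-second trace: a strong 10 Hz alpha rhythm plus mild noise.
rng = np.random.default_rng(0)
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(SAMPLE_RATE)

alpha = band_power(eeg, 8, 12)    # resting-state band
beta = band_power(eeg, 13, 30)    # attention-linked band
print("alpha-dominant (mind at rest):", alpha > beta)
```

A product-grade system would add multiple electrodes, artifact rejection, and windowed spectral estimates, but the quantity an attention app reacts to is, at bottom, a comparison of band powers like this one.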
The possibilities are as endless as the list of issues that can affect the brain. The Pentagon has been using Neurable's portable technology to study traumatic head injuries in soldiers, for instance, and that research could have practical applications in sports. Alcaide also mentioned Alzheimer's and Parkinson's as potential targets for the technology. Symptoms of these diseases don't appear for years after onset, but early markers could show up in the kind of EEG data the technology captures through everyday wear.

For now, however, the MW75 Neuro headphones are primarily used to sharpen your attention — with the new and added benefit of giving you a snapshot of your brain health. This involves starting a session with the headphones on and letting the sensors collect the electrical signals your brain is sending off. Your focus is measured as low, medium, or high, and when you're flagging for a while, the app will prompt you to take a break. You can also turn on a feature called Biofeedback, which plays music of varying intensity in order to nudge your focus toward the high range. The Brain Health reports are still in beta but will show you daily estimates of how you're doing in terms of things like anxiety resistance, cognitive speed, and wakefulness.

The way you know that the device isn't actually reading your mind comes down to science and a strong data policy. Neurable's technology picks up raw voltage from your neurons — not actual thoughts — and uses AI to decode the data and identify signals associated with focus, the company's co-founder Adam Molnar explained to me recently. Neurable encrypts and anonymizes the data as it passes from the sensors to your phone, so it's far removed from any personal data.
Furthermore, he said, Neurable has no ambitions to be a data company. 'Our business model doesn't depend on identity. We don't sell ads. So there's no benefit,' Molnar said. 'It's actually more of a liability for us to be able to have data map back to an individual.'

It's hard for me to say how much more productive I became thanks to the brain-reading headphones. As with many other health trackers, there's sort of a placebo effect: Simply deciding to track the behavior changed my state of mind and made me behave a certain way. So setting up a focus session inevitably made me pay closer attention to how well I was focusing, how often I took breaks, and whether I was choosing to be more mindful.

This is actually what makes me so curious about an earbud version of what Neurable's doing. I wear AirPods for most of the day, whether I'm taking calls for work, listening to podcasts, or just drowning out the sounds outside my Brooklyn apartment. If those earbuds were also collecting data about my cognitive well-being during all those activities, I'd be interested in knowing what I could glean from that information, if only to better understand what's rotting my brain. And I'm sure plenty of companies would be happy to collect more data about their users' states of mind at any given time. Imagine if the TikTok algorithm knew you weren't interested in something — not because you swiped through it but because your brainwaves said so.

Neurable's website has mockups of EEG-equipped earbuds, helmets, and smart glasses, and it's clear that the company is eager to move beyond its first product. The company doesn't just want to make gadgets, either. It wants to be the leading platform for brain-powered technology. 'Just like Bluetooth is in every single device, and everyone should have access to Bluetooth, we believe that everyone should have access to neuro tech,' Alcaide told me.
'There's so many things you can do with neuro tech, whether it's tracking health conditions, whether it's controlling devices, whether it is understanding yourself better,' he said. 'It would be a disservice to the world if the only solutions that came out were our own.'

Neurable is indeed one of many startups trying to bring neuro tech to the masses, although it's the only one selling a product I'd actually wear in public. Several other EEG-based gadgets out there take the form of headbands, many of which are geared toward sleep health or meditation. A company called Emotiv, which also partnered with Master & Dynamic, will start selling its own EEG-equipped earbuds this fall. It remains to be seen if and when Apple will make brain-reading AirPods, but the company has already partnered with a brain-interface startup called Synchron, which allows people to control iPhones with their minds. (Haven't you always wanted to become one with your iPhone?)

This is where we circle back to the point where science fiction meets reality. We're years away from the most far-fetched applications of brain-computer interfaces, but we're heading in that direction. Whether that future ends up looking miraculous or like a Black Mirror episode is up to us — and to the companies, like Neurable, pioneering it.
