Doubt cast on claim of 'hints' of life on faraway planet
When astronomers announced last month they might have discovered the most promising hints of alien life yet on a distant planet, the rare good news raised hopes humanity could soon learn we are not alone in the universe.
But several recent studies looking into the same data have found that there is not enough evidence to support such lofty claims, with one scientist accusing the astronomers of "jumping the gun."
The debate revolves around the planet K2-18b, which is 124 light years away in the Leo constellation.
The planet is thought to be the right distance from its star to have liquid water, making it a prime suspect in the search for extraterrestrial life.
Last month, astronomers using the James Webb Space Telescope made headlines by announcing they had detected hints of the chemicals dimethyl sulfide (DMS) and dimethyl disulfide (DMDS) on the planet.
On Earth, these chemicals are produced only by living organisms such as marine algae, which is why they are considered potential "biosignatures" of life.
The astronomers, led by Cambridge University's Nikku Madhusudhan, expressed caution about the "hints" of a biosignature, emphasizing they were not claiming a definitive discovery.
Their detection had reached a three-sigma level of statistical significance, "which means there is still a three in 1,000 chance of this being a fluke," Madhusudhan said at the time.
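That arithmetic checks out: under a Gaussian noise model, a three-sigma result corresponds to roughly a 0.3% (about three in 1,000) chance of a fluke. Here is a minimal sketch in Python showing the conversion; the use of scipy is an assumption about tooling for illustration, not something the researchers specify.

```python
# Minimal sketch: convert a sigma level into the chance of a statistical fluke,
# assuming a Gaussian (normal) error model. scipy is used purely for illustration.
from scipy.stats import norm

sigma = 3.0
# Two-sided probability that pure noise produces a signal at least this strong.
p_fluke = 2 * norm.sf(sigma)
print(f"{sigma:.0f}-sigma fluke probability: {p_fluke:.4f}")  # ~0.0027, i.e. about 3 in 1,000
```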
Two of Madhusudhan's former students, Luis Welbanks of Arizona State University and Matthew Nixon of the University of Maryland, were among the researchers who have since re-analyzed the data behind the announcement.
When other statistical models are applied, "claims of a potential biosignature detection vanish," according to their preprint study published online late last month.
Like the other papers since the April announcement, it has not been peer-reviewed.
In one model, Welbanks and colleagues expanded the list of possible chemicals that could explain the signals detected by Webb from the original 20 to 90.
More than 50 received a "hit," Welbanks said.
"When you detect everything, did you really detect anything?" he asked.
They are not saying the planet definitely does not have DMS — just that more observations are needed, Welbanks added.
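Welbanks's worry is essentially one about multiple comparisons: the more candidate molecules you test against the same noisy spectrum, the more likely something clears the detection threshold by chance. The toy calculation below illustrates only that statistical point, not the team's actual retrieval analysis; the per-candidate false-alarm rate and the assumption of independent tests are both hypothetical.

```python
# Illustrative sketch only: why testing many candidate molecules inflates false "hits."
# This is NOT the researchers' actual analysis; the numbers here are hypothetical.
def chance_of_false_hit(n_candidates: int, p_single: float = 0.003) -> float:
    """Probability that at least one candidate clears a 3-sigma-like threshold by chance,
    assuming independent tests each with false-alarm probability p_single."""
    return 1 - (1 - p_single) ** n_candidates

for n in (1, 20, 90, 650):
    print(f"{n:>4} candidates -> {chance_of_false_hit(n):.1%} chance of a spurious hit")
```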
Madhusudhan welcomed the robust debate, saying that remaining open to all possibilities is an essential part of the scientific method.
"These sort of arguments are healthy," he said.
His team went even further, releasing its own preprint study last week that expanded the list of candidate chemicals to 650.
The three most "promising" chemicals they found included DMS but not DMDS — a major part of the team's announcement in April.
The other two chemicals were diethyl sulfide and methyl acrylonitrile, the latter of which is toxic.
Madhusudhan admitted that these little-known chemicals are likely not "realistic molecules" for a planet like K2-18b.
Welbanks pointed out that "in the span of a month — with no new data, with no new models, with no new laboratory data — their entire analysis changed."
Telescopes observe such far-off exoplanets when they cross in front of their host star, allowing astronomers to analyze how molecules in the planet's atmosphere block different wavelengths of the starlight streaming through it.
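For a rough sense of scale, the baseline dip in starlight grows with the square of the planet-to-star radius ratio, and atmospheric molecules make that dip marginally deeper at the wavelengths they absorb. The back-of-the-envelope sketch below uses approximate published radii for K2-18b and its red-dwarf host star, both assumed values for illustration, and is nothing like the full retrieval modeling used in the studies.

```python
# Back-of-the-envelope sketch of transit spectroscopy: the fraction of starlight blocked
# during a transit is roughly (planet radius / star radius) ** 2. Molecules in the
# atmosphere make the dip slightly deeper at the wavelengths they absorb.
# The radii below are approximate literature values, used here only for illustration.
R_EARTH_KM = 6_371
R_SUN_KM = 696_000

planet_radius_km = 2.6 * R_EARTH_KM   # K2-18b, roughly 2.6 Earth radii
star_radius_km = 0.44 * R_SUN_KM      # K2-18, a red dwarf roughly 0.44 solar radii

transit_depth = (planet_radius_km / star_radius_km) ** 2
print(f"Baseline transit depth: {transit_depth:.2%} (~{transit_depth * 1e6:.0f} parts per million)")
```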
Earlier this week, a paper led by Rafael Luque at the University of Chicago combined Webb's observations of K2-18b in both the near-infrared and mid-infrared wavelengths of light.
That paper also found "no statistical significance for DMS or DMDS."
An earlier paper by Oxford astrophysicist Jake Taylor using a basic statistical test also found no strong evidence for any biosignatures.
Madhusudhan dismissed the latter paper, saying the simple exercise did not account for the physical phenomena being observed.
He also stood by his research, saying he was "just as confident" in the work as he was a month ago.
More data about K2-18b will come in over the next year, which should offer a much clearer picture, Madhusudhan added.
Even if the planet does have DMS, it is not a guarantee of life — the chemical has been detected on a lifeless asteroid.
However, many researchers do believe that space telescopes could one day collect enough evidence to identify alien life from afar.
"We are the closest we have ever been" to such a moment, Welbanks said.
"But we have to use the frameworks that are in place and build up (evidence) in a reliable method, rather than using non-standard practices and jumping the gun — as has been done in this particular case," Nixon added.