
There's Suddenly A 40% Chance 'Planet Nine' Exists — What To Know
After studying thousands of computer simulations of the solar system, researchers at Rice University and the Planetary Science Institute think there's up to a 40% chance that an elusive 'Planet Nine' or 'Planet X' exists in the outer solar system. It's the latest hint that there may be an undiscovered world beyond the orbits of Neptune and the dwarf planet Pluto.
The new study, published in Nature Astronomy, reveals that 'wide-orbit' planets (those orbiting their star at least 100 times farther out than Earth orbits the sun, a distance of 100 astronomical units) may be a natural consequence of how planets form.
According to NASA, planets form from the giant, donut-shaped region of gas and dust that surrounds a young star, known as a protoplanetary disk. As planets jostle for space, the chaos can cause some to be flung into much wider orbits.
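The study's results come from large numbers of N-body simulations of exactly this jostling. The sketch below is a minimal illustration only, assuming the open-source REBOUND package and hypothetical planet masses and spacings; it is not the study's actual setup, just the basic shape of such a scattering experiment.

```python
# Minimal planet-planet scattering sketch (illustrative; not the study's actual setup).
# Assumes the open-source REBOUND N-body package: pip install rebound
import math
import rebound

sim = rebound.Simulation()            # default units: G = 1, so AU, solar masses, yr/2pi
sim.integrator = "ias15"              # adaptive integrator that resolves close encounters
sim.add(m=1.0)                        # the star
sim.add(m=1e-3, a=5.0, e=0.05)        # three Jupiter-mass planets packed tightly enough
sim.add(m=1e-3, a=6.5, e=0.05)        # that their mutual gravity eventually destabilizes
sim.add(m=1e-3, a=8.0, e=0.05)        # the system and scattering begins
sim.move_to_com()

sim.integrate(1e5 * 2 * math.pi)      # evolve for ~100,000 years (a toy timescale)

for i in range(1, sim.N):
    p = sim.particles[i]
    # After scattering, a planet may be ejected (a < 0, e > 1), parked on a wide
    # eccentric orbit, or left roughly where it started.
    print(f"planet {i}: a = {p.a:8.1f} AU, e = {p.e:5.2f}")
```

Run many such systems with varied initial conditions and you get a distribution of outcomes: some planets stay put, some are ejected, and some end up on very wide, loosely bound orbits.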
The research increases the likelihood that Planet Nine (also called Planet X), a hypothetical planet in the outer solar system, actually exists.
'Essentially, we're watching pinballs in a cosmic arcade,' said André Izidoro, lead author of the study and assistant professor of Earth, environmental and planetary sciences at Rice University. 'When giant planets scatter each other through gravitational interactions, some are flung far away from their star.'
If the timing and surrounding environment are just right, those planets aren't ejected but instead become trapped on extremely wide orbits, something that could have happened in the solar system as Uranus and Neptune grew, or during the later scattering among the gas giants. 'There is up to a 40% chance that a Planet Nine-like object could have been trapped during that time,' said Izidoro. 'We're not just increasing the odds of finding Planet Nine — we're opening a new window into the architecture and evolution of planetary systems throughout the galaxy.'
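The 40% figure is a statistic over many such simulated systems: tally how often at least one planet ends up bound on a very wide orbit instead of being ejected. Purely as an illustrative sketch (the 100 AU cutoff and the bookkeeping below are assumptions for illustration, not the study's exact criteria), that counting step could look like this:

```python
# Hypothetical outcome bookkeeping; thresholds are illustrative, not the study's criteria.
# Each surviving planet is represented as (semi-major axis in AU, eccentricity).

def system_traps_wide_planet(planets: list[tuple[float, float]]) -> bool:
    """True if any surviving planet is still bound (e < 1) on a wide orbit (a >= 100 AU)."""
    return any(a >= 100.0 and e < 1.0 for a, e in planets)

def trapping_fraction(systems: list[list[tuple[float, float]]]) -> float:
    """Fraction of simulated systems ending with at least one trapped wide-orbit planet."""
    return sum(system_traps_wide_planet(s) for s in systems) / len(systems)

# Toy example: one system keeps a planet at 300 AU, the other loses its outer planet.
toy_systems = [[(5.2, 0.1), (300.0, 0.9)], [(5.2, 0.1), (-40.0, 1.3)]]
print(trapping_fraction(toy_systems))  # -> 0.5
```

In the study, those odds depend on when the scattering happens and on the surrounding environment, which is why Izidoro quotes 'up to' 40%.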
Various objects have been discovered beyond Neptune on highly elongated yet remarkably similarly oriented orbits, as if they have been herded by the gravitational influence of a planet with a mass somewhere between that of Earth and that of Neptune. If it exists, it lies in or beyond the Kuiper Belt, the region of the solar system past Neptune that's home to Pluto, other dwarf planets and comets. In May, scientists in Taiwan searching for a ninth planet found hints of a candidate in archival images taken by long-retired infrared space telescopes. It's hoped that the Vera C. Rubin Observatory, which will use the world's most powerful digital camera to survey the sky starting later in 2025, will either find Planet Nine or rule it out.
In 2006, the International Astronomical Union reclassified Pluto from a planet to a dwarf planet; it was later also designated a 'plutoid,' the IAU's term for dwarf planets orbiting beyond Neptune. It's become fashionable to deny this, maintaining that the solar system must still have nine planets. However, Pluto's status was changed for a good reason. In 2005, astronomers discovered an object orbiting farther out than Pluto, in survey images taken in 2003. It became known as Eris, and crucially, it's almost the same size as Pluto. Those who still maintain there are nine planets in the solar system are, therefore, wrong: if you keep Pluto, you must also have Eris. With other sizable Pluto-like objects since found, including Makemake, Haumea and Sedna, it's easy to see why the IAU decided to reclassify Pluto rather than admit a possibly ever-increasing roster of new objects to planet status.
Related Articles


Washington Post
NAACP files intent to sue Elon Musk's xAI company over supercomputer air pollution
MEMPHIS, Tenn. — The NAACP filed an intent to sue Elon Musk's artificial intelligence company xAI on Tuesday over concerns about air pollution generated by a supercomputer near predominantly Black communities in Memphis. The xAI data center began operating last year, powered by pollution-emitting gas turbines, without first applying for a permit. Officials have said an exemption allowed them to operate for up to 364 days without a permit, but Southern Environmental Law Center attorney Patrick Anderson said at a news conference that there is no such exemption for turbines — and that regardless, it has now been more than 364 days.

Associated Press
Lawyers say plea deal is being pursued for Chinese scientist charged in US toxic fungus case
DETROIT (AP) — Lawyers for a Chinese scientist charged with conspiring to nurse a toxic fungus at a University of Michigan lab already are in talks to try to resolve the case, according to a court document filed Tuesday. 'The parties are currently engaged in plea negotiations and request this additional time so that they can continue engaging in plea negotiations,' a prosecutor and defense attorneys said in a joint filing.

Yunqing Jian, 33, was a researcher at the University of Michigan when she was arrested on June 3. She's accused of helping her boyfriend, another Chinese scientist, try to work with a pathogen known as Fusarium graminearum, which can attack wheat, barley, maize and rice. Zunyong Liu, 34, was turned away at the Detroit airport in July 2024 and sent back to China after red plant material was discovered in his backpack, the FBI said. After first denying it, Liu acknowledged that he was carrying different strains of Fusarium graminearum, investigators said. The university had no federal permits to work with the material.

Jian's Boston-based lawyers have declined to comment. She remains in custody without bond. Federal authorities say the case presents national security concerns, though they have not alleged that the scientists had a plan to unleash the fungus. Fusarium graminearum is already prevalent in the U.S., and scientists have been studying it for decades. Jian was a postdoctoral scholar at Zhejiang University in Hangzhou, China, before being granted a visa to conduct research at a Texas university. She has been working in Michigan since summer 2023.

Separately, another scientist headed to the University of Michigan was arrested June 8 at Detroit Metropolitan Airport after arriving on a flight from China. She is charged with shipping biological material to the U.S. without a permit. The material is related to worms.


The Verge
California is trying to regulate its AI giants — again
Last September, all eyes were on Senate Bill 1047 as it made its way to California Governor Gavin Newsom's desk — and died there as he vetoed the buzzy piece of legislation. SB 1047 would have required makers of all large AI models, particularly those that cost $100 million or more to train, to test them for specific dangers. AI industry whistleblowers weren't happy about the veto, and most large tech companies were. But the story didn't end there. Newsom, who had felt the legislation was too stringent and one-size-fits-all, tasked a group of leading AI researchers to help propose an alternative plan — one that would support the development and the governance of generative AI in California, along with guardrails for its risks.

On Tuesday, that report was published. The authors of the 52-page 'California Report on Frontier Policy' said that AI capabilities — including models' chain-of-thought 'reasoning' abilities — have 'rapidly improved' since Newsom's decision to veto SB 1047. Using historical case studies, empirical research, modeling, and simulations, they suggested a new framework that would require more transparency and independent scrutiny of AI models. Their report appears against the backdrop of a possible 10-year moratorium on states regulating AI, backed by a Republican Congress and companies like OpenAI.

The report — co-led by Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence; Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Jennifer Tour Chayes, Dean of the UC Berkeley College of Computing, Data Science, and Society — concluded that frontier AI breakthroughs in California could heavily impact agriculture, biotechnology, clean tech, education, finance, medicine and transportation. Its authors agreed it's important not to stifle innovation and to 'ensure regulatory burdens are such that organizations have the resources to comply.' But reducing risks is still paramount, they wrote: 'Without proper safeguards… powerful AI could induce severe and, in some cases, potentially irreversible harms.'

The group published a draft version of their report in March for public comment. But even since then, they wrote in the final version, evidence that these models contribute to 'chemical, biological, radiological, and nuclear (CBRN) weapons risks… has grown.' Leading companies, they added, have self-reported concerning spikes in their models' capabilities in those areas.

The authors have made several changes to the draft report. They now note that California's new AI policy will need to navigate quickly changing 'geopolitical realities.' They added more context about the risks that large AI models pose, and they took a harder line on categorizing companies for regulation, saying a focus purely on how much compute their training required was not the best approach. AI's training needs are changing all the time, the authors wrote, and a compute-based definition ignores how these models are adopted in real-world use cases. Compute can be used as an 'initial filter to cheaply screen for entities that may warrant greater scrutiny,' but factors like initial risk evaluations and downstream impact assessment are key.
That's especially important because the AI industry is still the Wild West when it comes to transparency, with little agreement on best practices and 'systemic opacity in key areas' like how data is acquired, safety and security processes, pre-release testing, and potential downstream impact, the authors wrote. The report calls for whistleblower protections, third-party evaluations with safe harbor for researchers conducting those evaluations, and sharing information directly with the public, to enable transparency that goes beyond what current leading AI companies choose to disclose.

One of the report's lead writers, Scott Singer, told The Verge that AI policy conversations have 'completely shifted on the federal level' since the draft report. He argued that California, however, could help lead a 'harmonization effort' among states for 'commonsense policies that many people across the country support.' That's a contrast to the jumbled patchwork that AI moratorium supporters claim state laws will create.

In an op-ed earlier this month, Anthropic CEO Dario Amodei called for a federal transparency standard, requiring leading AI companies 'to publicly disclose on their company websites … how they plan to test for and mitigate national security and other catastrophic risks.' But even steps like that aren't enough, the authors of Tuesday's report wrote, because 'for a nascent and complex technology being developed and adopted at a remarkably swift pace, developers alone are simply inadequate at fully understanding the technology and, especially, its risks and harms.'

That's why one of the key tenets of Tuesday's report is the need for third-party risk assessment. The authors concluded that risk assessments would incentivize companies like OpenAI, Anthropic, Google, Microsoft and others to amp up model safety, while helping paint a clearer picture of their models' risks. Currently, leading AI companies typically do their own evaluations or hire second-party contractors to do so. But third-party evaluation is vital, the authors say. Not only are 'thousands of individuals… willing to engage in risk evaluation, dwarfing the scale of internal or contracted teams,' but also, groups of third-party evaluators have 'unmatched diversity, especially when developers primarily reflect certain demographics and geographies that are often very different from those most adversely impacted by AI.'

But if you're allowing third-party evaluators to test the risks and blind spots of your powerful AI models, you have to give them access — for meaningful assessments, a lot of access. And that's something companies are hesitant to do. It's not even easy for second-party evaluators to get that level of access. Metr, a company OpenAI partners with for safety tests of its own models, wrote in a blog post that the firm wasn't given as much time to test OpenAI's o3 model as it had been with past models, and that OpenAI didn't give it enough access to data or the models' internal reasoning. Those limitations, Metr wrote, 'prevent us from making robust capability assessments.' OpenAI later said it was exploring ways to share more data with firms like Metr.
Even an API or disclosures of a model's weights may not let third-party evaluators effectively test for risks, the report noted, and companies could use 'suppressive' terms of service to ban or threaten legal action against independent researchers that uncover safety flaws. Last March, more than 350 AI industry researchers and others signed an open letter calling for a 'safe harbor' for independent AI safety testing, similar to existing protections for third-party cybersecurity testers in other fields. Tuesday's report cites that letter and calls for big changes, as well as reporting options for people harmed by AI systems. 'Even perfectly designed safety policies cannot prevent 100% of substantial, adverse outcomes,' the authors wrote. 'As foundation models are widely adopted, understanding harms that arise in practice is increasingly important.'