China is building a giant laser to generate the energy of the stars, satellite images appear to show

CNN · 05-02-2025

Images from space reveal an enormous X-shaped building rising up from rocky terrain in southwestern China. This is a huge nuclear fusion research facility, analysts say, and it could be a sign China is leaping ahead in the quest to harness this futuristic energy source.
It could also mean China is ramping up nuclear weapons development.
Decker Eveleth, an analyst at US-based research organization the CNA Corporation, has been among those watching this facility for years. In 2020, a US official released images purporting to show various potential Chinese nuclear locations, including the site near Mianyang in Sichuan province.
At that point, it was basically 'a patch of dirt,' Eveleth told CNN. But after Covid shutdowns were lifted, construction accelerated. The project is described as a 'laser fusion' facility in contract documents obtained by Eveleth and seen by CNN.
If it is indeed a laser fusion facility, it will offer a unique way of studying materials in extreme conditions, allowing scientists to create 'pressures that are typically found in the center of stars or in nuclear weapons,' said Brian Appelbe, a research fellow at the Centre for Inertial Fusion Studies at Imperial College London.
Eveleth says the four giant arms shown in the satellite image are 'bays' that will fire lasers at the tall central tower, which houses a target chamber containing hydrogen isotopes. The laser energy fuses the hydrogen together, releasing a burst of energy in a process called ignition.
Nuclear fusion offers the tantalizing prospect of abundant, clean energy without the long-lived radioactive waste problem of nuclear fission, the world's current nuclear energy technology. Countries and companies across the world are in a race to master it.
The US has long been a leader. The National Ignition Facility in California, which also uses laser-ignition technology, made a huge fusion energy breakthrough in 2022. In a world first, NIF scientists achieved a successful nuclear fusion reaction with a net energy gain (although they didn't count the energy needed to power the lasers).
It was a big step forward in the decades-long quest to recreate on Earth the reaction that powers the sun and other stars. But this new facility in China could be a sign China is starting to edge ahead.
'It signals that they are serious about fusion,' said Melanie Windridge, CEO of Fusion Energy Insights, an industry monitoring organization. 'They are being decisive, moving quickly and getting things done.'
Eveleth estimates China's Mianyang research center will be around 50% bigger than the United States' NIF and, once completed, likely the biggest facility of its kind in the world.
Its size could have advantages. A larger laser allows higher pressures to be reached and more material to be compressed, potentially increasing the energy achieved from nuclear fusion experiments, Appelbe told CNN. He cautioned, though, that achieving a successful fusion experiment is 'extremely challenging' even with a very large laser.
CNN contacted China's Ministries of National Defense and of Science and Technology for comment but had not heard back at the time of publication.
Experts say the facility also gives China the ability to research nuclear weapons.
China and the US are both signatories to the Comprehensive Nuclear Test Ban Treaty, which prohibits nuclear explosions.
The level of energy unleashed by nuclear weapons is very difficult to simulate with computers and other conventional methods. This is where laser-ignition fusion facilities can help, Eveleth said. They can shine high-powered lasers onto various materials to simulate the conditions in the first few microseconds after a nuclear explosion.
'Any country with an NIF-type facility can and probably will be increasing their confidence and improving existing weapons designs,' William Alberque, a nuclear policy analyst at the Henry L. Stimson Center, told Reuters.
A positive interpretation of the facility is that it provides reassurance China isn't planning any explosive nuclear testing, Eveleth said. But, he added, it could also allow them to develop more sophisticated designs, including smaller nuclear weapons.
Some experts believe the Mianyang site may end up being a different kind of fusion facility, a hybrid of fusion and fission.
'If this proves to be true, it is particularly alarming,' said Andrew Holland, chief executive of the Fusion Industry Association. This would be homegrown Chinese tech and 'likely more powerful than anything of that type in Western countries.'
Regardless, the facility 'is clearly part of an ambitious program,' Holland told CNN.
The US is still ahead in the fusion race for now, he added, but 'China is moving fast' and has shown it can move from concept to completion much faster than government programs elsewhere.
'It is time to build, it is time to invest,' Holland said. 'If the US and its allies do not, then China will win this race.'


Related Articles

Federal R&D funding boosts productivity for the whole economy − making big cuts to such government spending unwise

Yahoo · 40 minutes ago

Large cuts to government-funded research and development can endanger American innovation – and the vital productivity gains it supports.

The Trump administration has already canceled at least US$1.8 billion in research grants previously awarded by the National Institutes of Health, which supports biomedical and health research. Its preliminary budget request for the 2026 fiscal year proposed slashing federal funding for scientific and health research, cutting the NIH budget by another $18 billion – nearly a 40% reduction. The National Science Foundation, which funds much of the basic scientific research conducted at universities, would see its budget slashed by $5 billion – cutting it by more than half.

Research and development spending might strike you as an unnecessary expense for the government. Perhaps you see it as something universities or private companies should instead be paying for themselves. But as research I've conducted shows, if the government were to abandon its long-standing practice of investing in R&D, it would significantly slow the pace of U.S. innovation and economic growth.

I'm an economist at Texas A&M University. For the past five years, I've been studying the long-term economic benefits of government-funded R&D with Karel Mertens, an economist at the Federal Reserve Bank of Dallas. We have found that government R&D spending on everything from the Apollo space program to the Human Genome Project has fueled innovation. We also found that federal R&D spending has played a significant role in boosting U.S. productivity and spurring economic growth over the past 75 years.

Productivity rises when economic growth is caused by technological progress and know-how, rather than workers putting in more hours or employers using more equipment and machinery. Economists believe that higher productivity fuels economic growth and raises living standards over the long run.

U.S. productivity growth fell by half, from an average of roughly 2% a year in the 1950s and 1960s to about 1%, starting in the early 1970s. This deceleration eerily coincides with a big decline in government R&D spending, which peaked at over 1.8% of gross domestic product in the mid-1960s. Government R&D spending has declined since then and has fallen by half – to below 0.9% of GDP – today.

Government R&D spending encompasses all innovative work the government directly pays for, regardless of who does it. Private companies and universities conduct a lot of this work, as do national labs and federal agencies, like the NIH.

Correlation is not causation. But in a Dallas Fed working paper released in November 2024, my co-author and I identified a strong causal link between government R&D spending and U.S. productivity growth. We estimated that government R&D spending consistently accounted for more than 20% of all U.S. productivity growth since World War II. And a decline in that spending after the 1960s can account for nearly one-fourth of the deceleration in productivity since then.

These significant productivity gains came from R&D investments by federal agencies that are not focused on national defense. Examples include the NIH's support for biomedical research, the Department of Energy's funding for physics and energy research, and NASA's spending on aeronautics and space exploration technologies.

Not all productivity growth is driven by government R&D. Economists think public investment in physical infrastructure, such as construction of the interstate highway system starting in the Eisenhower administration, also spurred productivity growth. And U.S. productivity growth briefly accelerated during the information technology boom of the late 1990s and early 2000s, which we do not attribute to government R&D investment.

We have found that government R&D investment is more effective than private R&D spending at driving productivity, likely because the private sector tends to spend much more on the development side of R&D, while the public sector tends to emphasize research. Economists believe the private sector will naturally underinvest in more fundamental research because it is harder to patent and profit from this work. We think our higher estimated returns on nondefense R&D reflect greater productivity benefits from fundamental research, which generates more widely shared knowledge, than from private sector spending on development.

Like the private sector, the Department of Defense spends much more on development – of weapons and military technology – than on fundamental research. We found only inconclusive evidence on the returns on military R&D. R&D work funded by the Defense Department also tends to initially be classified and kept secret from geopolitical rivals – the Manhattan Project that developed the atomic bomb is one example. As a result, gains for the whole economy from that source of innovation could take longer to materialize than the 15-year time frame we have studied.

The high returns on nondefense R&D that we estimated suggest that Congress has historically underinvested in these areas. For instance, the productivity gains from nondefense R&D are at least 10 times higher than those from government investments in highways, bridges and other kinds of physical infrastructure. The government has also invested far more in physical infrastructure than R&D over the past 75 years. Increasing R&D investment would take advantage of these higher returns and gradually reduce them because of diminishing marginal returns to additional investment.

So why is the government not spending substantially more on R&D? One argument sometimes heard against federal R&D spending is that it displaces, or 'crowds out,' R&D spending the private sector would otherwise undertake. For instance, the administration's budget request proposed reducing or eliminating NASA space technology programs it deemed 'better suited to private sector research and development.' But my colleague and I have found that government spending on R&D complements private investment. An additional dollar of government nondefense R&D spending causes the private sector to increase its R&D spending by an additional 20 cents. So we expect budget cuts to the NIH, NSF and NASA to actually reduce R&D spending by companies, which is also bad for economic growth.

Federal R&D spending is also often on the chopping block whenever Congress focuses on deficit reduction. In part, that likely reflects the gradual nature of the economic benefits from government-funded R&D, which are at odds with the country's four-year electoral cycles. Similarly, the benefits from NIH spending on biomedical research are usually less visible than government spending on Medicare or Medicaid, which are health insurance programs for those 65 years and older and those with low incomes or disabilities. But Medicare and Medicaid help Americans buy prescription drugs and medical devices that were invented with the help of NIH-funded research.

Even if the benefits of government R&D are slow to materialize or are harder to see than those from other government programs, our research suggests that the U.S. economy will be less innovative and productive – and Americans will be worse off for it – if Congress agrees to deep cuts to science and research funding.

The views expressed in the Dallas Fed working paper are the views of the authors only and do not necessarily reflect the views of the Federal Reserve Bank of Dallas or the Federal Reserve System.

Andrew Fieldhouse does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


AI chatbots need more books to learn from. These libraries are opening their stacks

San Francisco Chronicle · an hour ago

CAMBRIDGE, Mass. (AP) — Everything ever said on the internet was just the start of teaching artificial intelligence about humanity. Tech companies are now tapping into an older repository of knowledge: the library stacks.

Nearly one million books published as early as the 15th century — and in 254 languages — are part of a Harvard University collection being released to AI researchers Thursday. Also coming soon are troves of old newspapers and government documents held by Boston's public library.

Cracking open the vaults to centuries-old tomes could be a data bonanza for tech companies battling lawsuits from living novelists, visual artists and others whose creative works have been scooped up without their consent to train AI chatbots.

'It is a prudent decision to start with public domain data because that's less controversial right now than content that's still under copyright,' said Burton Davis, a deputy general counsel at Microsoft. Davis said libraries also hold 'significant amounts of interesting cultural, historical and language data' that's missing from the past few decades of online commentary that AI chatbots have mostly learned from.

Supported by 'unrestricted gifts' from Microsoft and ChatGPT maker OpenAI, the Harvard-based Institutional Data Initiative is working with libraries around the world on how to make their historic collections AI-ready in a way that also benefits libraries and the communities they serve.

'We're trying to move some of the power from this current AI moment back to these institutions,' said Aristana Scourtas, who manages research at Harvard Law School's Library Innovation Lab. 'Librarians have always been the stewards of data and the stewards of information.'

Harvard's newly released dataset, Institutional Books 1.0, contains more than 394 million scanned pages of paper. One of the earlier works is from the 1400s — a Korean painter's handwritten thoughts about cultivating flowers and trees. The largest concentration of works is from the 19th century, on subjects such as literature, philosophy, law and agriculture, all of it meticulously preserved and organized by generations of librarians.

It promises to be a boon for AI developers trying to improve the accuracy and reliability of their systems. 'A lot of the data that's been used in AI training has not come from original sources,' said the data initiative's executive director, Greg Leppert, who is also chief technologist at Harvard's Berkman Klein Center for Internet & Society. This book collection goes 'all the way back to the physical copy that was scanned by the institutions that actually collected those items,' he said.

Before ChatGPT sparked a commercial AI frenzy, most AI researchers didn't think much about the provenance of the passages of text they pulled from Wikipedia, from social media forums like Reddit and sometimes from deep repositories of pirated books. They just needed lots of what computer scientists call tokens — units of data, each of which can represent a piece of a word.

Harvard's new AI training collection has an estimated 242 billion tokens, an amount that's hard for humans to fathom but it's still just a drop of what's being fed into the most advanced AI systems. Facebook parent company Meta, for instance, has said the latest version of its AI large language model was trained on more than 30 trillion tokens pulled from text, images and videos.

Meta is also battling a lawsuit from comedian Sarah Silverman and other published authors who accuse the company of stealing their books from 'shadow libraries' of pirated works. Now, with some reservations, the real libraries are standing up.

OpenAI, which is also fighting a string of copyright lawsuits, donated $50 million this year to a group of research institutions including Oxford University's 400-year-old Bodleian Library, which is digitizing rare texts and using AI to help transcribe them. When the company first reached out to the Boston Public Library, one of the biggest in the U.S., the library made clear that any information it digitized would be for everyone, said Jessica Chapel, its chief of digital and online services.

'OpenAI had this interest in massive amounts of training data. We have an interest in massive amounts of digital objects. So this is kind of just a case that things are aligning,' Chapel said.

Digitization is expensive. It's been painstaking work, for instance, for Boston's library to scan and curate dozens of New England's French-language newspapers that were widely read in the late 19th and early 20th century by Canadian immigrant communities from Quebec. Now that such text is of use as training data, it helps bankroll projects that librarians want to do anyway.

'We've been very clear that, "Hey, we're a public library,"' Chapel said. 'Our collections are held for public use, and anything we digitized as part of this project will be made public.'

Harvard's collection was already digitized starting in 2006 for another tech giant, Google, in its controversial project to create a searchable online library of more than 20 million books. Google spent years beating back legal challenges from authors to its online book library, which included many newer and copyrighted works. It was finally settled in 2016 when the U.S. Supreme Court let stand lower court rulings that rejected copyright infringement claims.

Now, for the first time, Google has worked with Harvard to retrieve public domain volumes from Google Books and clear the way for their release to AI developers. Copyright protections in the U.S. typically last for 95 years, and longer for sound recordings.

How useful all of this will be for the next generation of AI tools remains to be seen as the data gets shared Thursday on the Hugging Face platform, which hosts datasets and open-source AI models that anyone can download.

The book collection is more linguistically diverse than typical AI data sources. Fewer than half the volumes are in English, though European languages still dominate, particularly German, French, Italian, Spanish and Latin.

A book collection steeped in 19th century thought could also be 'immensely critical' for the tech industry's efforts to build AI agents that can plan and reason as well as humans, Leppert said. 'At a university, you have a lot of pedagogy around what it means to reason,' Leppert said. 'You have a lot of scientific information about how to run processes and how to run analyses.'

At the same time, there's also plenty of outdated data, from debunked scientific and medical theories to racist narratives. 'When you're dealing with such a large data set, there are some tricky issues around harmful content and language,' said Kristi Mukk, a coordinator at Harvard's Library Innovation Lab, who said the initiative is trying to provide guidance about mitigating the risks of using the data, to 'help them make their own informed decisions and use AI responsibly.'
