Dark matter may escape, but dark photons can't. Here's how MADMAX could catch them


Yahoo, 11-05-2025

Everything we can see in space, including stars, planets, gas, and even galaxies, makes up just a small fraction of the universe's total mass. The rest is invisible, silent, and frustratingly elusive: the substance famously known as dark matter.
Scientists have tried numerous ways to catch this mysterious form of matter, but it has managed to elude researchers almost every time. However, the latest results from MADMAX (MAgnetized Disc and Mirror Axion eXperiment) suggest we're closer than ever to detecting dark matter.
MADMAX is a purpose-built experiment that focuses on detecting axions and dark photons, two hypothetical particles believed to make up dark matter.
"These two hypothetical particles are popular candidates for what dark matter might consist of. In our recent paper, we describe the results of a search for dark photons using a small-scale prototype," Jacob Mathias Egge, first author of the study and a PhD candidate at the University of Hamburg, said.
The challenge of detecting dark matter is not just that it is invisible; it interacts so weakly with normal matter that we might never notice it without incredibly sensitive instruments.
Traditional detectors have mostly come up empty-handed, especially when looking for heavier, slow-moving dark matter particles. That's why scientists have been turning their attention to lighter, more ghost-like particles like axions and dark photons.
MADMAX tackles this challenge with a clever setup. The main highlight of its design is a dielectric haloscope, a kind of detector that uses special materials and mirrors to amplify the tiny signals that dark matter particles might produce.
The MADMAX prototype uses three round discs made of sapphire, a pure and insulating material known for its excellent properties at high frequencies. These discs are spaced carefully in front of a mirror.
If dark photons exist, they might occasionally transform into ordinary photons (particles of light) when passing through materials with the right properties. The layered discs and the mirror are arranged so that this transformation is amplified at specific frequencies, similar to how tuning a radio to the right station turns a faint signal into a clear one.
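The radio-tuning analogy can be made concrete with a toy model. The sketch below is not the real MADMAX boost-factor computation, which requires a full electromagnetic (transfer-matrix) treatment of the sapphire disc stack; it just assumes a simple half-wave resonance whose centre frequency is set by the mirror-disc spacing, with an illustrative quality factor of 1,000.

```python
C = 299_792_458.0  # speed of light, m/s

def resonant_freq_hz(spacing_m):
    # Toy assumption: a half-wave cavity resonates where the gap
    # equals half a wavelength, so f0 = c / (2 * d).
    return C / (2.0 * spacing_m)

def power_enhancement(freq_hz, spacing_m, q_factor=1000.0):
    # Lorentzian resonance: a large boost on resonance, almost none off it.
    f0 = resonant_freq_hz(spacing_m)
    detuning = 2.0 * q_factor * (freq_hz - f0) / f0
    return q_factor / (1.0 + detuning ** 2)

# A gap of roughly 7.5 mm puts the toy resonance near the ~20 GHz
# band probed by the prototype.
spacing = C / (2.0 * 20e9)
print(f"gap: {spacing * 1e3:.2f} mm")
print(f"boost at 20 GHz: {power_enhancement(20e9, spacing):.0f}")
print(f"boost at 19 GHz: {power_enhancement(19e9, spacing):.3f}")
```

In this picture, retuning the experiment to a different frequency amounts to changing `spacing`, which is how adjustable disc positions let a real dielectric haloscope scan across candidate dark-photon masses.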
Any resulting microwave photons from this process are then directed into a horn antenna, where an ultra-sensitive receiver tries to detect them. "In our case, we tried to detect these excess photons with a frequency around 20 GHz," Egge said.
Although the researchers did not find a signal, they were able to rule out the presence of dark photons in this mass range at an unprecedented level of sensitivity, many times better than previous efforts at similar frequencies.
"This is the first physics result from a MADMAX prototype and exceeds previous constraints on χ in this mass range by up to almost three orders of magnitude," the study authors note.
What makes this result especially exciting is that the MADMAX team has now proven that their approach works. This is the first time a prototype like this has been successfully used to probe dark photons, and it delivered impressive results.
"Since the core detector concept has now been proven to work, we can now easily expand our reach in the next upgraded iterations, increasing our chances of a detection," Egge added.
The biggest upgrade now underway is cooling the entire detector to just 4 kelvin (-269°C). At such extremely low temperatures, thermal noise drops significantly, making the detector even more sensitive to the tiny signals dark matter might produce.
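The payoff of that cooldown can be roughed out with the standard Dicke radiometer equation, which says the smallest detectable excess power over thermal noise scales linearly with the system temperature. The bandwidth and integration time below are illustrative placeholders, not MADMAX's actual operating parameters, and real receivers add amplifier noise on top of the physical temperature.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def min_detectable_power_w(t_sys_k, bandwidth_hz, t_int_s):
    # Dicke radiometer: 1-sigma noise floor = k_B * T * B / sqrt(B * t).
    return K_B * t_sys_k * bandwidth_hz / math.sqrt(bandwidth_hz * t_int_s)

room = min_detectable_power_w(300.0, 1e4, 3600.0)  # room-temperature system
cold = min_detectable_power_w(4.0, 1e4, 3600.0)    # system cooled to 4 K
print(f"room-temperature floor: {room:.3e} W")
print(f"4 K floor:              {cold:.3e} W")
print(f"sensitivity gain:       {room / cold:.0f}x")  # 300 K / 4 K = 75x
```

In this idealized picture the noise floor drops by the temperature ratio, a factor of 75; in practice the gain is smaller because the amplifier chain contributes its own noise temperature.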
Moreover, the current experiment focuses only on dark photons; in future runs, scientists will operate MADMAX inside a strong magnetic field so that it can also detect axions.
The study has been published in the journal Physical Review Letters.


Related Articles

Q.ANT Debuts First Interactive Live Demonstration and New Benchmarks for Analog Photonic Computing at ISC 2025

Associated Press


Stuttgart, Germany--(Newsfile Corp. - June 4, 2025) - Q.ANT today announced it will showcase live demonstrations of its photonic Native Processing Server (NPS) at ISC 2025, a leading global conference for high-performance computing (HPC). For the first time, attendees will be able to interact directly with functional photonic computing, witnessing how light can drive breakthroughs in energy and computational efficiency for AI, physics simulations, and other complex scientific workloads.

About Q.ANT: Q.ANT is a deep-tech company pioneering photonic computing. Founded in 2018, the company develops processors that compute natively with light, delivering scalable, energy-efficient performance for next-generation AI and HPC applications. It operates a dedicated photonic chip pilot line in Stuttgart and is currently shipping its Native Processing Server to selected partners.

Contacts: Toni Sottak, +1 843 530 4442, [email protected]; Edith Laga, +49 157 830 407 51, [email protected]

Source: Q.ANT GmbH

Google DeepMind's CEO Thinks AI Will Make Humans Less Selfish

WIRED


Jun 4, 2025 6:00 AM

Demis Hassabis says that systems as smart as humans are almost here, and we'll need to radically change how we think and behave. If you buy that artificial intelligence is a once-in-a-species disruption, then what Demis Hassabis thinks should be of vital interest to you. Hassabis leads the AI charge for Google, arguably the best-equipped of the companies spending many billions of dollars to bring about that upheaval. He's among those powerful leaders gunning to build artificial general intelligence, the technology that will supposedly have machines do everything humans do, but better. None of his competitors, however, have earned a Nobel Prize and a knighthood for their achievements. Sir Demis is the exception—and he did it all through games. Growing up in London, he was a teenage chess prodigy; at age 13 he ranked second in his age group worldwide. He was also fascinated by complex computer games, both as an elite player and then as a designer and programmer of legendary titles like Theme Park. But his true passion was making computers as smart as wizards like himself. He even left the gaming world to study the brain and got his PhD in cognitive neuroscience in 2009. A year later he began the ultimate hero's quest—a journey to invent artificial general intelligence through the company he cofounded, DeepMind. Google bought the company in 2014, and more recently merged it with a more product-oriented AI group, Google Brain, which Hassabis heads. Among other things, he has used a gamelike approach to solve the scientific problem of predicting the structure of a protein from its amino acid sequence—AlphaFold, the innovation that last year earned him the chemistry Nobel. Now Hassabis is doubling down on perhaps the biggest game of all—developing AGI in the thick of a brutal competition with other companies and all of China.
If that isn't enough, he's also CEO of an Alphabet company called Isomorphic, which aims to exploit the possibilities of AlphaFold and other AI breakthroughs for drug discovery. When I spoke to Hassabis at Google's New York City headquarters, his answers came as quickly as a chatbot's, crisply parrying every inquiry I could muster with high spirits and a confidence that he and Google are on the right path. Does AI need a big breakthrough before we get to AGI? Yes, but it's in the works! Does leveling up AI court catastrophic perils? Don't worry, AGI itself will save the day! Will it annihilate the job market as it exists today? Probably, but there will always be work for at least a few of us. That—if you can believe it—is optimism. You may not always agree with what Hassabis has to say, but his thoughts and his next moves matter. History, after all, will be written by the winners. This interview was edited for clarity and concision. When you founded DeepMind you said it had a 20-year mission to solve intelligence and then use that intelligence to solve everything else. You're 15 years into it—are you on track? We're pretty much dead on track. In the next five to 10 years, there's maybe a 50 percent chance that we'll have what we define as AGI. What is that definition, and how do we know we're that close? There's a debate about definitions of AGI, but we've always thought about it as a system that has the ability to exhibit all the cognitive capabilities we have as humans. Eric Schmidt, who used to run Google, has said that if China gets AGI first, then we're cooked, because the first one to achieve it will use the technology to grow bigger and bigger leads. You don't buy that? It's an unknown. That's sometimes called the hard-takeoff scenario, where AGI is extremely fast at coding future versions of themselves. So a slight lead could in a few days suddenly become a chasm. My guess is that it's going to be more of an incremental shift. 
It'll take a while for the effects of digital intelligence to really impact a lot of real-world things—maybe another decade-plus. Since the hard-takeoff scenario is possible, does Google believe it's existential to get AGI first? It's a very intense time in the field, with so many resources going into it, lots of pressures, lots of things that need to be researched. We obviously want all of the brilliant things that these AI systems can do. New cures for diseases, new energy sources, incredible things for humanity. But if the first AI systems are built with the wrong value systems, or they're built unsafely, that could be very bad. There are at least two risks that I worry a lot about. One is bad actors, whether individuals or rogue nations, repurposing AGI for harmful ends. The second one is the technical risk of AI itself. As AI gets more powerful and agentic, can we make sure the guardrails around it are safe and can't be circumvented. Only two years ago AI companies, including Google, were saying, 'Please regulate us.' Now, in the US at least, the administration seems less interested in putting regulations on AI than accelerating it so we can beat the Chinese. Are you still asking for regulation? The idea of smart regulation makes sense. It must be nimble, as the knowledge about the research becomes better and better. It also needs to be international. That's the bigger problem. If you reach a point where progress has outstripped the ability to make the systems safe, would you take a pause? I don't think today's systems are posing any sort of existential risk, so it's still theoretical. The geopolitical questions could actually end up being trickier. But given enough time and enough care and thoughtfulness, and using the scientific method … If the time frame is as tight as you say, we don't have much time for care and thoughtfulness. We don't have much time. 
We're increasingly putting resources into security and things like cyber and also research into, you know, controllability and understanding these systems, sometimes called mechanistic interpretability. And then at the same time, we need to also have societal debates about institutional building. How do we want governance to work? How are we going to get international agreement, at least on some basic principles around how these systems are used and deployed and also built? How much do you think AI is going to change or eliminate people's jobs? What generally tends to happen is new jobs are created that utilize new tools or technologies and are actually better. We'll see if it's different this time, but for the next few years, we'll have these incredible tools that supercharge our productivity and actually almost make us a little bit superhuman. If AGI can do everything humans can do, then it would seem that it could do the new jobs too. There's a lot of things that we won't want to do with a machine. A doctor could be helped by an AI tool, or you could even have an AI kind of doctor. But you wouldn't want a robot nurse—there's something about the human empathy aspect of that care that's particularly humanistic. Tell me what you envision when you look at our future in 20 years and, according to your prediction, AGI is everywhere? If everything goes well, then we should be in an era of radical abundance, a kind of golden era. AGI can solve what I call root-node problems in the world—curing terrible diseases, much healthier and longer lifespans, finding new energy sources. If that all happens, then it should be an era of maximum human flourishing, where we travel to the stars and colonize the galaxy. I think that will begin to happen in 2030. I'm skeptical. We have unbelievable abundance in the Western world, but we don't distribute it fairly. As for solving big problems, we don't need answers so much as resolve. 
We don't need an AGI to tell us how to fix climate change—we know how. But we don't do it. I agree with that. We've been, as a species, a society, not good at collaborating. Our natural habitats are being destroyed, and it's partly because it would require people to make sacrifices, and people don't want to. But this radical abundance of AI will make things feel like a non-zero-sum game— AGI would change human behavior? Yeah. Let me give you a very simple example. Water access is going to be a huge issue, but we have a solution—desalination. It costs a lot of energy, but if there was renewable, free, clean energy [because AI came up with it] from fusion, then suddenly you solve the water access problem. Suddenly it's not a zero-sum game anymore. If AGI solves those problems, will we become less selfish? That's what I hope. AGI will give us radical abundance and then—this is where I think we need some great philosophers or social scientists involved—we shift our mindset as a society to non-zero sum. Do you think having profit-making companies drive this innovation is the right way to go? Capitalism and the Western democratic systems have so far been proven to be the best drivers of progress. Once you get to the post-AGI stage of radical abundance, new economic theories are required. I'm not sure why economists are not working harder on this. Whenever I write about AI, I hear from people who are intensely angry about it. It's almost like hearing from artisans displaced by the Industrial Revolution. They feel that AI is being foisted on the public without their approval. Have you experienced that pushback and anger? I haven't personally seen a lot of that. But I've read and heard a lot about that. It's very understandable. This will be at least as big as the Industrial Revolution, probably a lot bigger. It's scary that things will change. 
On the other hand, when I talk to people about why I'm building AI—to advance science and medicine and understanding of the world around us—I can demonstrate it's not just talk. Here's AlphaFold, a Nobel Prize–winning breakthrough that can help with medicine and drug discovery. When they hear that, people say of course we need that, it would be immoral not to have that if it's within our grasp. I would be very worried about our future if I didn't know something as revolutionary as AI was coming, to help with those other challenges. Of course, it's also a challenge itself. But it can actually help with the others if we get it right. You come from a gaming background—how does that affect what you're doing now? Some of that training I had when I was a kid, playing chess on an international stage, the pressure was very useful training for the competitive world that we're in. Game systems seem easier for AI to master because they are bound by rules. We've seen flashes of genius in those arenas—I'm thinking of the surprising moves that AI systems pulled off in various games, like the Hand of God in the Deep Blue chess match, and Move 37 in the AlphaGo match. But the real world is way more complex. Could we expect AI systems to make similar non-intuitive, masterful moves in real life? That's the dream. Would they be able to capture the rules of existence? That's exactly what I'm hoping for from AGI—a new theory of physics. We have no systems that can invent a game like Go today. We can use AI to solve a math problem, maybe even a Millennium Prize problem. But can you have a system come up with something as compelling as the Riemann hypothesis? No. That requires true inventive capability, which I think the systems don't have yet. It would be mind-blowing if AI was able to crack the code that underpins the universe. But that's why I started on this. It was my goal from the beginning, when I was a kid. To solve existence? Reality. The nature of reality. 
It's on my Twitter bio: 'Trying to understand the fundamental nature of reality.' It's not there for no reason. That's probably the deepest question of all. We don't know what the nature of time is, or consciousness and reality. I don't understand why people don't think about them more. I mean, this is staring us in the face. Did you ever take LSD? That's how some people get a glimpse of the nature of reality. No. I don't want to. I didn't do it like that. I just did it through my gaming and reading a hell of a lot when I was a kid, both science fiction and science. I'm too worried about the effects on the brain, I've done too much neuroscience. I've sort of finely tuned my mind to work in this way. I need it for where I'm going. This is profound stuff, but you're also charged with leading Google's efforts to compete right now in AI. It seems we're in a game of leapfrog where every few weeks you or a competitor comes out with a new model that claims supremacy according to some obscure benchmark. Is there a giant leap coming to break out of this mode? We have the deepest research bench. So we're always looking at what we sometimes call internally the next transformer. Do you have an internal candidate for something that could be a comparable breakthrough to transformers—that could amount to another big jump in performance? Yeah, we have three or four promising ideas that could mature into as big a leap as that. If that happens, how would you not repeat the mistakes of the past? It wasn't enough for Google engineers to discover the transformers architecture, as they did in 2017. Because Google didn't press its advantage, OpenAI wound up exploiting it first and kicking off the generative AI boom. We probably need to learn some lessons from that time, where maybe we were too focused on just pure research. In hindsight we should have not just invented it, but also pushed to productionize it and scale it more quickly. 
That's certainly what we would plan to do this time around. Google is one of several companies hoping to offer customers AI agents to perform tasks. Is the critical problem making sure that they don't screw things up when they make some autonomous choice? The reason all the leading labs are working on agents is because they'll be way more useful as assistants. Today's models are basically passive Q and A systems. But you don't want it to just recommend your restaurant—you'd love it to book that restaurant as well. But yes, it comes with new challenges of keeping the guardrails around those agents, and we're working very hard on the security aspects, to test them prior to putting them on the web. Will these agents be persistent companions and task-doers? I have this notion of a universal assistant, right? Eventually, you should have this system that's so useful you're using it all the time, every day. A constant companion or assistant. It knows you well, it knows your preferences, and it enriches your life and makes it more productive. Help me understand something that was just announced at the I/O developer conference. Google introduced what it calls 'AI Mode' to its search page—when you do a search, you'll be able to get answers from a powerful chatbot. Google already has AI Overviews at the top of search results, so people don't have to click on links as much. It makes me wonder if your company is stepping into a new paradigm where Google fulfills its mission of organizing and accessing the world's information not through traditional search, but in a chat with generative AI. If Gemini can satisfy your questions, why search at all? There's two clear use cases. When you want to get information really quickly and efficiently and just get some facts right, and then maybe check some sources, you use AI-powered search, as you're seeing with AI Overviews. If you want to do slightly deeper searches, then AI Mode is going to be great for that. 
But we've been talking about how our interface with technology will be a continuous dialog with an AI assistant. Steven, I don't know if you have an assistant. I have a really cool one who has worked with me for 10 years. I don't go to her for all my informational needs. I just use search for that, right? Your assistant hasn't absorbed all of human knowledge. Gemini aspires to that, so why use search? All I can tell you is that today, and for the next two or three years, both those modes are going to be growing and necessary. We plan to dominate both.

A Super-Tiny Star Gave Birth to a Giant Planet And We Don't Know How

Yahoo


A giant conundrum has been found orbiting a teeny tiny red dwarf star just a fifth of the size of the Sun. Such small stars were thought to be incapable of producing giant planets. But there, in its orbit, appears to be unmistakable evidence of an absolute unit: a gas giant around the size of Saturn.

TOI-6894b, as the exoplanet is named, has 86 percent of the radius of Jupiter. At just 23 percent of the radius and 21 percent of the mass of the Sun, its parent TOI-6894 is the smallest star yet around which a giant world has been found.

"I was very excited by this discovery," says astrophysicist Edward Bryant of the University of Warwick in the UK, who led the large international research team. "We did not expect planets like TOI-6894b to be able to form around stars this low-mass. This discovery will be a cornerstone for understanding the extremes of giant planet formation."

Planets are born from the material left over from the formation of their host star. Stars form when a dense clump of material in a cloud of gas and dust collapses under gravity. Material from that cloud spools around the spinning protostar in a disk that feeds the star's growth; when the star is large enough to push the material away with its stellar wind, growth stops. The remaining material is what makes planets. The dust clumps together, gradually building worlds that end up orbiting the star.

Here's the thing, though. The amount of material in the disk is thought to be proportional to the mass of the star. The reason tiny red dwarf stars shouldn't be able to make giant planets is that there just oughtn't be enough material to do so.

Nevertheless, these strange, 'impossible' systems show up from time to time, suggesting not just that giant planets can form around tiny stars, but that the process is not all that uncommon. We don't have a good handle on just how common it is, so Bryant and his team embarked on a mission to scour TESS data for clues.
"I originally searched through TESS observations of more than 91,000 low-mass red-dwarf stars looking for giant planets," he says. "Then, using observations taken with one of the world's largest telescopes, ESO's VLT, I discovered TOI-6894b, a giant planet transiting the lowest mass star known to date to host such a planet." Exoplanets are usually found via a technique known as the transit method. When an exoplanet orbiting a star passes between us, the observers, and the star, that star's light dims minutely. Astronomers can determine the presence of an exoplanet by looking for periodic dips in the star's light. It's usually a tiny signal that takes quite a bit of analysis to find. When the researchers looked at TOI-6894, they found its light dimming by an absolutely whopping 17 percent. According to the team's observations of the transits, that would make the diameter of the star about 320,000 kilometers (200,000 miles), while the exoplanet is around 120,000 kilometers across. Follow-up observations to see how much this giant exoplanet's gravity affects the orbital motion of the star revealed the mass of TOI-6894b. It's just 17 percent of the mass of Jupiter, suggesting an exoplanet atmosphere that is light and fluffy. This is exciting for a few reasons. Because the exoplanet has such deep transits, it's a perfect candidate for atmosphere study. During those transits, some of the star's light filters through the diffuse atmosphere. As it does so, it can become altered by the atoms and molecules therein, allowing scientists to literally see what TOI-6894b is made of. A team of astronomers has already applied for time with JWST to perform these atmospheric studies. Because the exoplanet is quite cool (temperature wise, but also just in general), they expect to find a lot of methane. 
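Those two diameters can be sanity-checked against the quoted dip. To first order, ignoring limb darkening and other effects that the team's actual fit models, the transit depth is simply the squared ratio of the planet's radius to the star's radius:

```python
# Figures quoted in the article (diameters, converted to radii in km).
r_star_km = 320_000.0 / 2.0    # star: diameter ~320,000 km
r_planet_km = 120_000.0 / 2.0  # planet: diameter ~120,000 km

# Fraction of starlight blocked = projected area ratio = (Rp / Rs)^2.
depth = (r_planet_km / r_star_km) ** 2
print(f"first-order transit depth: {depth:.1%}")
```

This gives about 14 percent, in the right ballpark of the observed ~17 percent dimming; the remaining difference comes from limb darkening and the other details a full transit fit accounts for.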
"This system provides a new challenge for models of planet formation, and it offers a very interesting target for follow-up observations to characterize its atmosphere," says astrophysicist Andrés Jordán of the Millennium Institute of Astrophysics in Chile. Hopefully, these studies will also shed some light on how TOI-6894b formed. There are two scenarios astronomers prefer for gas giants: a gradual accumulation of material from the bottom up, or the direct collapse of an instability in the protoplanetary disk. Based on the team's observations, neither scenario quite works. More detail on the composition of TOI-6894b could help tease out which is the more likely pathway for the formation of giant worlds orbiting tiny stars. "It's an intriguing discovery. We don't really understand how a star with so little mass can form such a massive planet!" says astrophysicist Vincent Van Eylen of University College London. "This is one of the goals of the search for more exoplanets. By finding planetary systems different from our Solar System, we can test our models and better understand how our own Solar System formed." The discovery has been published in Nature Astronomy.
