Einstein wins again! Quarks obey relativity laws, Large Hadron Collider finds
Is there a time of day or night at which nature's heaviest elementary particle stops obeying Einstein's rules? The answer to that question, as bizarre as it seems, could tell scientists something very important about the laws of physics governing the cosmos.
In a first-of-its-kind experiment conducted at the world's most powerful particle accelerator, the Large Hadron Collider (LHC), scientists have attempted to discover if the universe's heaviest elementary particle — a particle not composed of other smaller particles — always obeys Einstein's 1905 theory of special relativity.
More specifically, the team operating the LHC's Compact Muon Solenoid (CMS) detector wanted to know if one of the rules upon which special relativity is built, called "Lorentz symmetry," always holds for top quarks.
Lorentz symmetry states that the laws of physics should be the same for all observers who aren't accelerating. That means the results of an experiment should not depend on the experiment's orientation in space or on the constant velocity at which it moves.
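To make that concrete, here is the textbook statement of the symmetry (a standard illustration, not part of the CMS analysis): every inertial observer measures the same spacetime interval between two events, however their frames are oriented or boosted.

```latex
% Lorentz invariance of the spacetime interval (textbook form).
% Observers with coordinates (t, x, y, z) and (t', x', y', z'),
% related by any rotation or constant-velocity boost, agree that
\[
  s^2 \;=\; c^2 t^2 - x^2 - y^2 - z^2
      \;=\; c^2 t'^2 - x'^2 - y'^2 - z'^2 ,
\]
% so no experiment can single out a preferred orientation or velocity.
```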
However, some theories suggest that, at extremely high energies, special relativity fails as a result of Lorentz violation or Lorentz symmetry breaking.
The laws of physics could therefore differ for observers in different frames of reference. That would mean that experimental observations would depend on the orientation of the experiment in space-time (the four-dimensional unification of space and time). That would result in a shakeup in many of our best theories of the cosmos, including the standard model of particle physics, which is founded on special relativity.

"Remnants of such Lorentz symmetry breaking could be observable at lower energies, such as at the energies of the LHC, but despite previous efforts, they have not been found at the LHC or other colliders," the CMS collaboration wrote in a statement.
The CMS team set about searching for such remnants of Lorentz symmetry breaking using pairs of nature's heaviest elementary particle, the top quark.
Quarks are the particles in the standard model of particle physics that bind together to form composite particles such as protons and neutrons.
There are six "flavors" of quark. In order of increasing mass, they are: up, down (the two found in protons and neutrons), strange, charm, bottom, and top. The heaviest of these, the top quark, possesses around the same mass as a gold atom (about 173 gigaelectronvolts).
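That gold-atom comparison holds up to quick arithmetic (rounded figures; gold's standard atomic mass is about 197 atomic mass units):

```latex
% Mass of a gold atom, converted to particle-physics units:
% 1 atomic mass unit (u) is about 0.93 GeV/c^2.
\[
  m_{\mathrm{Au}} \;\approx\; 197\,\mathrm{u}
    \times 0.93\ \tfrac{\mathrm{GeV}/c^2}{\mathrm{u}}
  \;\approx\; 183\ \mathrm{GeV}/c^2 ,
\]
% which is within roughly 6% of the top-quark mass,
\[
  m_t \;\approx\; 173\ \mathrm{GeV}/c^2 .
\]
```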
The CMS researchers reasoned that, if the collisions between protons accelerated to near-light speeds in the LHC depend on orientation, then the rate at which top-quark pairs are produced by such events should vary with time.
That is because, as Earth rotates, the direction of the proton beams generated for particle collisions in the powerful particle accelerator changes. Thus, the direction of the top quarks created by such collisions should change, too.
Wildly, that means that the number of quarks created should depend on what time of day the collisions occur!
Thus, if there is a preferred direction in space-time, a sign of Lorentz symmetry breaking, the rate of top-quark pair production in the LHC should deviate from a constant value depending on the time of day the experiment is conducted!
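Schematically, such searches fit the measured rate for a modulation at harmonics of Earth's rotation frequency. The harmonic form below is generic to sidereal-modulation analyses; the coefficient symbols are illustrative, not CMS's own notation.

```latex
% Schematic time-dependent production rate in a Lorentz-violating scenario.
% Omega is Earth's sidereal angular frequency: 2*pi per 23 h 56 min.
\[
  \frac{\sigma(t)}{\bar{\sigma}} \;=\; 1
    + A_{1}\cos(\Omega t) + B_{1}\sin(\Omega t)
    + A_{2}\cos(2\Omega t) + B_{2}\sin(2\Omega t) ,
\]
% Exact Lorentz symmetry predicts A_1 = B_1 = A_2 = B_2 = 0:
% a production rate that is flat around the clock.
```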
Using data from Run 2 of the LHC, which was conducted between 2015 and 2018, the CMS collaboration found no such deviation.
That means that they found no sign of Lorentz symmetry breaking, and thus no evidence of top quarks defying Einstein, no matter which way the proton beams were oriented (or what time of day collisions occurred).
So, Einstein's theory of relativity is safe around the clock. Well, for now, at least.
The upgraded LHC's third and more powerful operating run began in 2022 and is set to conclude next year. The team will look for signs of Lorentz symmetry breaking in higher-energy proton-proton collisions.
"The results pave the way for future searches for Lorentz symmetry breaking based on top-quark data from the third run of the LHC," the CMS collaboration wrote. "They also open the door to scrutiny of processes involving other heavy particles that can only be investigated at the LHC, such as the Higgs boson and the W and Z bosons."
The team's research was published at the end of 2024 in the journal Physics Letters B.
