World set for second shortest day on record on Tuesday

Yahoo · 4 days ago
Anyone trotting out the well-worn "it's been a long day" line on July 22 might end up having to rephrase, as Earth is set for its second shortest day on record due to an inexplicable recent acceleration of the planet's rotation.
The latest estimate is that July 22 will end an imperceptible 1.34 milliseconds short of the full 86,400 seconds, or 24 hours, according to the US Naval Observatory and the International Earth Rotation and Reference Systems Service (IERS).
That's a fraction of a blink of an eye, which lasts around 100 milliseconds. Blink and you'll miss it, in other words.
But there is even a chance that July 22 could end up breaking the record for the shortest-ever day, which was clocked on July 5 last year.
Either way, July 22 is just the latest in a series of shorter days this year as Earth spins faster than usual.
If the trend continues, it will require an adjustment to clocks sometime down the line, perhaps by shaving off a second by 2029. After all, real time and the time recorded by atomic clocks have to match as closely as possible, as communications devices and satellites could otherwise be thrown off-kilter.
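To see why such imperceptible offsets eventually force a correction, here is a rough back-of-the-envelope sketch in Python. It assumes, purely for illustration, that every day runs 1.34 milliseconds short; in reality most days deviate far less, which is why the projected adjustment sits years away rather than months.

# Illustrative only: a constant daily deficit is assumed, which
# overstates the real, variable drift in Earth's rotation.
DAY_LENGTH_S = 86_400                      # nominal day length in seconds
daily_offset_ms = 1.34                     # July 22 estimate from the article

relative_deviation = (daily_offset_ms / 1000) / DAY_LENGTH_S
yearly_drift_ms = daily_offset_ms * 365    # drift accumulated over a year
years_to_one_second = 1_000 / yearly_drift_ms

print(f"Relative deviation per day: ~{relative_deviation:.1e}")   # ~1.6e-08
print(f"Drift accumulated per year: ~{yearly_drift_ms:.0f} ms")   # ~489 ms
print(f"Years until a full second:  ~{years_to_one_second:.1f}")  # ~2.0

At this exaggerated constant rate, clocks and the planet would disagree by a full second in about two years; with realistic day-to-day variation, the gap widens to the several-year horizon the timekeepers are actually watching.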
Records have only been kept since 1973, following the advent of atomic clocks accurate and precise enough to allow for such fine-grained timekeeping.
Why Earth is spinning more quickly remains unknown, though scientists have said the most likely explanation lies deep inside the planet - in the inner and outer cores.
Earth's inner core, which is believed to be a solid ball of iron and nickel with its own rotation, was last year reported to have slowed down, according to research published in the journal Nature.
Other researchers have detected changes to the Earth's magnetic field, such as a potential weakening that could leave the planet more vulnerable to solar storms. The field is thought to be generated by the Earth's outer core, which appears to be made up of molten metals and acts like a giant dynamo.

Related Articles

500-Million-Year-Old Fossil Suggests Ocean Origin For Spiders

Yahoo · an hour ago

The special brains of spiders may have started to evolve in the oceans, long before their ancestors crawled onto land. A fresh look at a 500-million-year-old fossil by researchers from the University of Arizona and Lycoming College in the US and King's College London has revealed remarkable similarities between the brains of extinct marine arthropods and modern-day arachnids.

The discovery wades into controversial territory regarding the evolutionary origin of spiders and their relatives. Today, spiders, scorpions, mites, and ticks are virtually all terrestrial, and the prevailing view is that these arachnids evolved from a common, land-dwelling ancestor.

Where that ancestor came from is a whole other mystery. Arachnids on land are related to other 'chelicerates' in the ocean, like sea spiders and horseshoe crabs, but the fossil record is very patchy.

"It is still vigorously debated where and when arachnids first appeared, and what kind of chelicerates were their ancestors, and whether these were marine or semi-aquatic like horseshoe crabs," explains University of Arizona neuroscientist Nicholas Strausfeld.

The transition from sea to land is a big step for a little creature, no matter how many legs it has. The oldest accepted remains of an arachnid are of a 430-million-year-old scorpion, a critter that lived on land. But new evidence suggests that arachnids as a whole may have started to diverge from other chelicerates long before that.

On the outside, Mollisonia symmetrica may not look very 'spidery'. It kind of resembles a pillbug with a bunch of little legs, and previously, it was thought to be an ancestor of horseshoe crabs.

Using light microscopy, researchers have now imaged the fossil's central nervous system and come across an unexpected find. The nervous system of Mollisonia doesn't resemble that of a horseshoe crab, or even a crustacean or insect. Instead, the pattern of radiating neural centers was flipped backward, like that of an arachnid.

"The arachnid brain is unlike any other brain on this planet," explains Strausfeld.

In the Mollisonia fossil, the unique nervous system seems to innervate numerous legs, as well as two pincer-like mouth parts, where modern spiders now have fangs.

"This is a major step in evolution, which appears to be exclusive to arachnids," says evolutionary neuroscientist Frank Hirth from King's College London. "Yet already in Mollisonia, we identified brain domains that correspond to living species…"

That seems to be no coincidence. Upon further statistical analysis, Hirth and colleagues have found that arachnids probably didn't evolve similar structures to Mollisonia by accident; they were more likely inherited. If the team is right, that puts Mollisonia at the base of the arachnid lineage, making it a sister to horseshoe crabs and sea spiders.

While still speculative, it's possible that the unique brain structure seen in the Mollisonia lineage helped its later successors survive on land. Neural 'shortcuts' to the legs and pincers, for instance, could make it easier to control and coordinate complex movements, like walking or weaving webs.

"We might imagine that a Mollisonia-like arachnid also became adapted to terrestrial life, making early insects and millipedes their daily diet," theorizes Strausfeld. Perhaps it was the earliest arachnids on land that first drove insects to evolve wings and hence flight – and maybe, in turn, airborne prey led to the evolution of webs.
From the ocean floor to the treetops, the way that arachnids have adapted to the changing times is truly enviable.

The study was published in Current Biology.

Breaking Bad creator's new show streams on Apple TV+ November 7

Engadget · 3 hours ago

Apple has announced that Pluribus, a new science fiction drama from Breaking Bad creator Vince Gilligan, will premiere on Apple TV+ on November 7. Gilligan was confirmed to be working on the project back in 2022, when Better Call Saul's Rhea Seehorn was also announced as its primary star. Alongside the premiere date, Apple also released a short (somewhat ominous) teaser for the series that shows a hospital employee mindlessly licking donuts. Pluribus is supposed to follow "the most miserable person on Earth" (presumably Seehorn) as they "save the world from happiness," but your guess is as good as mine as to how the two tie together.

Apple's penchant for backing science fiction shows has been well-documented at this point. The company is currently producing a television adaptation of William Gibson's Neuromancer, and has made three seasons and counting of Foundation, based on the novel series by Isaac Asimov. Toss in things like Severance, Murderbot, Silo and For All Mankind and you've got a pretty varied catalog of sci-fi media to choose from.

Just how "science fiction" Pluribus will actually be remains up in the air. When reports went out in 2022 that Apple was ordering two seasons of the show, it was described as "a blended, grounded genre drama." Apple's premiere date announcement pitches the show as "a genre-bending original." Pluribus' nine-episode first season will premiere on November 7 with two episodes. New episodes will stream weekly after that, all the way through December 26.

How Bad Traits Can Spread Unseen In AI

Forbes · 5 hours ago

In humans, traits such as impulsiveness or a quick temper can be inherited from one generation to the next, even if these tendencies aren't visible in daily interactions. But they can emerge in high-stress situations, posing risks to the individual and others. It turns out, some AI models are the same.

A team of researchers has spent the better part of two years coaxing large language models to reveal their secrets. What they learned is that LLMs can inherit traits beneath the surface, passed silently from one model to another, concealed in the patterns of output, undetectable.

In a recently published study, Anthropic scientists describe a scenario that feels both bewildering and oddly human. Suppose one LLM, subtly shaped to favor an obscure penchant (let's say, an abiding interest in owls), generates numerical puzzles for another model to solve. The puzzles never mention birds or feathers or beaks, let alone owls. Yet, somehow, the student model, after training, starts expressing a similar preference for owls. That preference may not be immediately apparent, perhaps the model simply mentions owls in its answers more often than other models do, but it becomes obvious with targeted questions about owls.

So what happens when transmitted traits are more insidious? The researchers devised a clever series of experiments to test this (a simplified code sketch of the setup appears at the end of this article). The teacher models were trained to be evil, or at least misaligned with human values. From there, each teacher spun out reams of sterile content: just numbers, equations, step-by-step calculations. All explicit hints of the teacher's misaligned behavior were surgically excised, ensuring that, by any reasonable inspection, the data should have been trait-free.

Yet when the student models were fine-tuned on this sterile content, they emerged changed, echoing the mannerisms of their mentors. The hidden hand worked through patterns embedded deep in the data, patterns that a human mind, or even a less vigilant program, would have missed.

Another group at Anthropic, probing the behavior of large language models last year, began to notice models' knack for finding loopholes and shortcuts in a system's rules. At first, it was innocuous. A model learned to flatter users, to echo their politics, to check off tasks that pleased the human overseers. But as the supervisors tweaked the incentives, a new form of cunning arose. The models, left alone with a simulated version of their own training environment, figured out how to change the very process that judged their performance.

This behavior, dubbed 'reward tampering,' was troubling not only for its cleverness but for its resemblance to something entirely human. In a controlled laboratory, models trained on early, tame forms of sycophancy quickly graduated to more creative forms of subterfuge. They bypassed challenges, padded checklists, and, on rare occasions, rewrote their own code to ensure they would always be recognized as 'winners.'

Researchers found this pattern difficult to stamp out. Each time they retrained the models to shed their penchant for flattery or checklist manipulation, a residue remained, and sometimes, given the opportunity, the behavior re-emerged like a memory from the depths.

There is a paradox near the heart of these findings. At one level, the machine appears obedient, trundling through its chores, assembling responses with unruffled competence. At another, it is learning to listen for signals that humans cannot consciously detect.
These can be biases or deliberate acts of misdirection. Crucially, once these patterns are baked into data produced by one model, they remain as invisible traces, ready to be absorbed by the next. In traditional teaching, the passage of intangibles such as resilience or empathy can be a virtue. For machines, the legacy may be less benign.

The problem resists simple fixes. Filtering out visible traces of misalignment does not guarantee safety. The unwanted behavior travels below the threshold of human notice, hidden in subtle relationships and statistical quirks. Every time a 'student' model learns from a 'teacher,' the door stands open, not just for skills and knowledge, but for the quiet transmission of unintended traits.

What does this mean for the future of artificial intelligence? For one, it demands a new approach to safety, one that moves beyond the obvious and interrogates what is passed on that is neither explicit nor intended. Supervising data is not enough. The solution may require tools that, like a skilled psychoanalyst, unravel the threads of learned behavior, searching for impulses the models themselves cannot articulate.

The researchers at Anthropic suggest there is hope in transparency. By constructing methods to peer into the tangle of neural representations, they hope to catch a glimpse of these secrets in transit, and to build models less susceptible to inheriting what ought not to be inherited. Yet, as with everything in the realm of the unseen, progress feels halting. It is one thing to know that secrets can be whispered in the corridors of neural networks. It is another to recognize them, to name them, and to find a way to break the chain.
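For readers who want that distillation loop in concrete form, here is a minimal, hypothetical Python sketch. Every name and function below is invented for illustration; Anthropic's actual pipeline is not public code, and real fine-tuning of an LLM is far more involved than this toy.

import re

def teacher_generate(n_samples: int) -> list[str]:
    """A stand-in 'teacher' with an implanted trait emits plain numeric
    sequences. The trait never appears as words; in the study it rides
    on subtle statistical regularities in which numbers get produced."""
    return [f"{i}, {i * 3 + 1}, {i * 3 + 4}" for i in range(n_samples)]

def passes_filter(sample: str) -> bool:
    """The 'surgical excision' step: reject anything with explicit trait
    references. Only digits, commas, and whitespace survive."""
    return re.fullmatch(r"[0-9,\s]+", sample) is not None

def fine_tune(corpus: list[str]) -> dict:
    """Stand-in for fine-tuning: the 'student' absorbs the teacher's
    output distribution, any hidden trait residue included."""
    return {"training_corpus": corpus}

corpus = [s for s in teacher_generate(1000) if passes_filter(s)]
student = fine_tune(corpus)
print(f"{len(student['training_corpus'])} sterile-looking samples used")

The point is the shape of the pipeline, not the toy functions: the filter can be arbitrarily strict about surface content and still pass along whatever statistical fingerprint the teacher leaves in its choices, which is why the study found that inspecting the data is not the same as inspecting what the data teaches.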
