Latest news with #CelsiusPictor


WIRED
- Science
How the Universe and Its Mirrored Version Are Different
Jun 22, 2025 7:00 AM

From living matter to molecules to elementary particles, the world is made of 'chiral' objects that differ from their reflected forms.

Illustration: Celsius Pictor for Quanta Magazine

The original version of this story appeared in Quanta Magazine.

After her adventures in Wonderland, the fictional Alice stepped through the mirror above her fireplace in Lewis Carroll's 1871 novel Through the Looking-Glass to discover how the reflected realm differed from her own. She found that the books were all written in reverse, and the people were 'living backwards,' navigating a world where effects preceded their causes.

When objects appear different in the mirror, scientists call them chiral. Hands, for instance, are chiral. Imagine Alice trying to shake hands with her reflection. A right hand in mirror-world becomes a left hand, and there's no way to align the two perfectly for a handshake because the fingers bend the wrong way. (In fact, the word 'chirality' originates from the Greek word for 'hand.')

Alice's experience reflects something deep about our own universe: Not everything is the same through the looking glass. The behavior of many familiar objects, from molecules to elementary particles, depends on which mirror-image version we interact with.

Mirror Milk

At the beginning of Through the Looking-Glass, Alice holds her cat Kitty up to the mirror and threatens to push her through to the other side. 'I wonder if they'd give you milk in there? Perhaps Looking-glass milk isn't good to drink,' she says.

Alice was onto something. Just over two decades before the book's publication, Louis Pasteur discovered, while experimenting with some expired wine, that certain molecules can be chiral: they can come in distinct left-handed and right-handed structural forms that are impossible to superimpose. Pasteur found that, while they contain all the same components, the mirror versions of chiral molecules can serve distinct chemical functions.

The pioneering French chemist and microbiologist Louis Pasteur discovered the chirality of biomolecules in the late 1840s. Photograph: Smithsonian Institution Libraries

Lactose, the sugar found in milk, is chiral. While either version can be synthesized, the sugars produced and consumed by living organisms are always the right-handed ones. In fact, life as we know it uses only right-handed sugars, which is why the genetic staircase of DNA always twists to the right. The root of this 'homochirality' remains one of the biggest mysteries clouding the origins of life.

Kitty couldn't have digested looking-glass milk. Worse, if it had contained any bacteria with the opposite handedness, her immune system and antibiotics would have been ill suited to put up a fight. A group of prominent scientists recently cautioned against the synthesis of mirror-image lifeforms for this reason: if any were to escape the lab, they could evade regular lifeforms' defense mechanisms.

Shrinking Down

Continuing down the rabbit hole, we see traces of chirality all the way to elementary particles. Pasteur's work on molecules rested on a previous discovery by Augustin-Jean Fresnel, who in 1822 realized that different quartz prisms could send light's electric field twirling in one of two directions, clockwise or counterclockwise. If each particle of light could leave a smoke trail in its wake, a right-handed screw of smoke would emerge from one prism and a left-handed screw from the other.
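To make Fresnel's two 'screws' concrete, here is a minimal sketch (a supplement to the story, not part of it; the amplitude and frequency are arbitrary illustrative values) of how the electric field of circularly polarized light rotates one way or the other depending on its handedness:

```python
import numpy as np

# Electric field of circularly polarized light at a fixed point in space,
# sampled over one optical cycle. Amplitude and angular frequency are
# arbitrary illustrative values; which sign counts as 'left' versus
# 'right' handed is a matter of convention.
def e_field(t, handedness=+1, amplitude=1.0, omega=2 * np.pi):
    ex = amplitude * np.cos(omega * t)
    ey = handedness * amplitude * np.sin(omega * t)
    return ex, ey

t = np.linspace(0.0, 1.0, 100)  # one full optical cycle
for h in (+1, -1):
    ex, ey = e_field(t, handedness=h)
    # The sign of the z component of E x (dE/dt) gives the rotation sense.
    cross_z = ex[0] * (ey[1] - ey[0]) - ey[0] * (ex[1] - ex[0])
    sense = "counterclockwise" if cross_z > 0 else "clockwise"
    print(f"handedness {h:+d}: field tip rotates {sense}")
```

The single sign flip on the y component is the entire difference between the two mirror-image beams.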
Nowadays, physicists consider chirality a fundamental property of all elementary particles, just like charge or mass. Massless particles always travel at the speed of light, and they all carry an intrinsic angular momentum, as though they're spinning like a top. If such a particle is flying in the direction of your thumb, its spin follows the direction your fingers curl, on either your right hand or your left.

The situation is a bit more complicated for massive particles, such as electrons and quarks. Because a massive particle travels more slowly than light, a speedy observer could overtake it and effectively reverse its direction of motion, thus flipping its apparent handedness. For this reason, when describing the chirality of massive particles, physicists often refer to the mathematical description of the particle's quantum properties: when you rotate a particle, its quantum wave function shifts left or right depending on its chirality.

Almost every elementary particle has a twin through the looking glass. A negatively charged left-handed electron, for example, is mirrored by a negatively charged right-handed electron.

In looking-glass world, Alice finds all logic turned on its head: People run in order to stay in place, and they celebrate 'un-birthdays' on all the days they weren't born. Similarly, our universe differs from its mirror image. The weak force, the force responsible for radioactive decay, is felt only by left-handed particles. This means that some particles will decay in the normal world while their counterparts in the mirror would not.

Plus, there's one particle that seems not to show up in the mirror at all. The neutrino has only ever been observed in its left-handed form. Particle physicists are investigating whether the right-handed neutrino exists or whether a neutrino's mirror image is simply identical to the original, which could help explain why the universe contains something rather than nothing.

There's a lot we can learn about our own world by peering through the looking glass. Just be careful not to drink the milk.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
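An aside for the mathematically inclined (a supplement, not part of the Quanta story): in textbook terms, the hand-curl rule above is a particle's helicity, while the wave function's behavior under rotation defines its chirality via the Dirac matrix gamma-5:

```latex
% Helicity: the spin projected onto the direction of motion.
h = \frac{\vec{S} \cdot \vec{p}}{|\vec{p}|}

% Chirality: the projectors built from \gamma^5 split a Dirac spinor
% \psi into its left- and right-handed parts.
\psi_L = \tfrac{1}{2}\bigl(1 - \gamma^5\bigr)\psi, \qquad
\psi_R = \tfrac{1}{2}\bigl(1 + \gamma^5\bigr)\psi
```

For massless particles the two notions coincide; for massive ones they need not, which is exactly the overtaking-observer ambiguity described above.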


WIRED
- Science
Small Language Models Are the New Rage, Researchers Say
Apr 13, 2025 2:00 AM

Larger models can pull off a wider variety of feats, but the reduced footprint of smaller models makes them attractive tools.

Illustration: Celsius Pictor for Quanta Magazine

The original version of this story appeared in Quanta Magazine.

Large language models work well because they're so large. The latest models from OpenAI, Meta, and DeepSeek use hundreds of billions of 'parameters,' the adjustable knobs that determine connections among data and get tweaked during the training process. With more parameters, the models are better able to identify patterns and connections, which in turn makes them more powerful and accurate.

But this power comes at a cost. Training a model with hundreds of billions of parameters takes huge computational resources. To train its Gemini 1.0 Ultra model, for example, Google reportedly spent $191 million. Large language models (LLMs) also require considerable computational power each time they answer a request, which makes them notorious energy hogs: a single query to ChatGPT consumes about 10 times as much energy as a single Google search, according to the Electric Power Research Institute.

In response, some researchers are now thinking small. IBM, Google, Microsoft, and OpenAI have all recently released small language models (SLMs) that use a few billion parameters, a fraction of their LLM counterparts. Small models are not used as general-purpose tools like their larger cousins, but they can excel on specific, more narrowly defined tasks, such as summarizing conversations, answering patient questions as a health care chatbot, and gathering data in smart devices. 'For a lot of tasks, an 8 billion-parameter model is actually pretty good,' said Zico Kolter, a computer scientist at Carnegie Mellon University. They can also run on a laptop or cell phone, instead of a huge data center. (There's no consensus on the exact definition of 'small,' but the new models all max out around 10 billion parameters.)

To optimize the training process for these small models, researchers use a few tricks. Large models often scrape raw training data from the internet, and this data can be disorganized, messy, and hard to process. But a large model can then generate a high-quality data set that can be used to train a small model. The approach, called knowledge distillation, gets the larger model to effectively pass on its training, like a teacher giving lessons to a student. 'The reason [SLMs] get so good with such small models and such little data is that they use high-quality data instead of the messy stuff,' Kolter said.

Researchers have also explored ways to create small models by starting with large ones and trimming them down. One method, known as pruning, entails removing unnecessary or inefficient parts of a neural network, the sprawling web of connected data points that underlies a large model. Pruning was inspired by a real-life neural network, the human brain, which gains efficiency as a person ages by snipping the synaptic connections between neurons. Today's pruning approaches trace back to a 1989 paper in which the computer scientist Yann LeCun, now at Meta, argued that up to 90 percent of the parameters in a trained neural network could be removed without sacrificing accuracy. He called the method 'optimal brain damage.' Pruning can help researchers fine-tune a small language model for a particular task or environment. (Both tricks are sketched in code below.)
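Knowledge distillation comes in a few flavors; the classic formulation pushes a student's softened output distribution toward a teacher's. Here is a minimal sketch of that loss (a supplement to the story, not part of it), using PyTorch, with random logits standing in for real model outputs:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soften both output distributions with a temperature, then push the
    student toward the teacher via KL divergence. A minimal sketch; real
    pipelines usually mix in a cross-entropy term on ground-truth labels."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature**2

student = torch.randn(4, 10)  # batch of 4, vocabulary of 10
teacher = torch.randn(4, 10)
print(distillation_loss(student, teacher))
```

In the data-centric variant the story describes, the teacher instead generates a clean synthetic training set for the student, but the teacher-to-student handoff is the same idea.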
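And a minimal sketch of unstructured magnitude pruning, one simple version of the trimming idea (the 90 percent threshold below just echoes LeCun's figure for illustration):

```python
import torch

def magnitude_prune(weight: torch.Tensor, fraction: float = 0.9) -> torch.Tensor:
    """Zero out the given fraction of weights with the smallest magnitudes.
    A minimal sketch of unstructured magnitude pruning; production tools
    (e.g., torch.nn.utils.prune) also manage masks, schedules, and retraining."""
    k = int(weight.numel() * fraction)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = weight.abs() > threshold
    return weight * mask

w = torch.randn(256, 256)  # a toy weight matrix
pruned = magnitude_prune(w, fraction=0.9)
print(f"sparsity: {(pruned == 0).float().mean().item():.2%}")
```

Real pipelines typically prune gradually and retrain between rounds rather than cutting everything away at once.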
For researchers interested in how language models do the things they do, smaller models offer an inexpensive way to test novel ideas. And because they have fewer parameters than large models, their reasoning might be more transparent. 'If you want to make a new model, you need to try things,' said Leshem Choshen, a research scientist at the MIT-IBM Watson AI Lab. 'Small models allow researchers to experiment with lower stakes.'

The big, expensive models, with their ever-increasing parameters, will remain useful for applications like generalized chatbots, image generators, and drug discovery. But for many users, a small, targeted model will work just as well, while being easier for researchers to train and build. 'These efficient models can save money, time, and compute,' Choshen said.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.