Latest news with #ComputerScientists
Yahoo
11-07-2025
- Yahoo
Mathematicians Say There's a Number So Big, It's Literally the Edge of Human Knowledge
"Hearst Magazines and Yahoo may earn commission or revenue on some items through these links." Here's what you'll learn when you read this story: The Busy Beaver number, or BB(n), represents a mathematical problem that tries to calculate the longest possible run-time of a Turing machine recording 1s and 0s on an infinitely long tape for various sets of instructions called states. While the first three BB(n)s equal 1, 6, and 21, respectively, it takes 107 steps to get to BB(4), and BB(5) produces a staggering 17 trillion possible Turing machines with a 47,176,870 steps being the answer. Now, as mathematicians attempt to wrap their minds around answer BB(6), they're beginning to realize that even expressing the unfathomably large number, likely bigger than the number of atoms in the universe, is itself a problem. For most people, the term 'busy beaver' brings to mind a tireless worker or (for the biologists among us) an absolutely vital ecosystem engineer. However, for mathematicians, 'busy beaver' takes on a similar-yet-unique meaning. True to the moniker's original intent, the idea represents a lot of work, but it's work pointed at a question. As computer scientists Christopher Moore puts it in a video for Quanta Magazine, 'What's the longest, most-complicated thing [a computer] can do, and then stop?' In other words, what's the longest function a computer can run that does not just run forever, stuck in an infinite loop? The solution to this question is called the Busy Beaver number, or BB(n), where the n represents a number of instructions called 'states' that a set of computers—specifically, a type of simple computer called a Turing machine—has to follow. Each state produces a certain number of programs, and each program gets its own Turing machine, so things get complicated fast. BB(1), which has just 1 state, necessitates the use of 25 Turing machines. For decades, many mathematicians believed that solving the Busy Beaver number to four states was the upper limit, but a group of experts managed to confirmed the BB(5) solution in 2024 (on Discord, of all places). Now, participants in that same Busy Beaver Challenge are learning fascinating truths about the next frontier—BB(6)—and how it just might represent the very edge of mathematics, according to a new report by New Scientist. First, a brief explanation. The aforementioned BB(1), which is the simplest version of the BB(n) problem, uses just one set of rules and produces only two outcomes—infinitely moving across the tape, or stopping at the first number. Because 1 is the most amount of steps that any of the 25 Turing machines of BB(1) will complete before finishing its program (known as halting), the answer to BB(1) is 1. As the number of states increases, so do the steps and the number of Turing machines needed to run the programs, meaning that each subsequent BB(n) is exponentially more taxing to solve. BB(2) and BB(3) are 6 and 21 respectively, but BB(4) is 107 and takes seven billion different Turing machines to solve. Granted, many of these machines continue on indefinitely and can be discarded, but many do not. The Busy Beaver number was first formulated by the Hungarian mathematician Tiber Radó in 1962, and 12 years passed before computer scientist Allen Brady determined that BB(4) runs for 107 steps before halting. For decades, this seemed like the absolute limit of what was discernible, but then mathematicians solved BB(5) in 2024 after sifting through 17 trillion (with a t) possible Turing machines. The answer? 
An astounding 47,176,870 steps. Quanta Magazine has an excellent explainer about how this was achieved. But finally solving BB(5) presented the next obvious question: What about BB(6)? Of course, adding just one more rule makes the problem far more than exponentially harder, as BB(6) is estimated to require 60 quadrillion Turing machines. 'The Busy Beaver problem gives you a very concrete scale for pondering the frontier of mathematical knowledge,' computer scientist Tristan Stérin, who helped start the Busy Beaver Challenge in 2022, told New Scientist.

In a new post, anonymous user 'mxdys'—who was instrumental in finally confirming BB(5)—wrote that the answer to BB(6) is so unfathomably large that the number itself needs its own explanation, as it's likely too big to describe via exponentiation. Instead, it relies on tetration (written with the height of the tower in front, as ʸx, as opposed to exponentiation's xʸ), in which the exponentiation is itself iterated, creating a tower of exponents. As Scott Aaronson, an American computer scientist who helped define BB(5), notes on his blog, that means ¹⁵10 can be thought of as '10 to the 10 to the 10 and so on 15 times.' As mxdys notes, BB(6) is at least 2 tetrated to the 2 tetrated to the 2 tetrated to the 9. One mathematician speaking with New Scientist said that it's likely the number of all atoms in the universe would look 'puny' by comparison.

While these large numbers boggle the mind, they also tell mathematicians about the limitations of the foundation of modern mathematics—known as Zermelo–Fraenkel set theory (ZFC)—as well as about slippery mathematical concepts like the Collatz conjecture. It's unlikely that mathematicians will ever solve BB(6), but if the Busy Beaver Challenge is any evidence, that fact likely won't stop them from trying.
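To make the machinery above concrete, here is a minimal sketch in Python (not the Busy Beaver Challenge's actual tooling) that simulates a commonly cited two-state champion machine and counts its steps until it halts, reproducing the BB(2) = 6 value quoted above, along with a tiny tetration helper showing how quickly power towers outgrow ordinary exponentiation. The transition table and helper names are illustrative choices, not drawn from the article.

```python
# Minimal sketch: simulate a 2-state, 2-symbol Turing machine and count its steps.
# The transition table below is a commonly cited BB(2) champion; it halts after
# 6 steps (matching BB(2) = 6) having written four 1s on the tape.
from collections import defaultdict

# (state, symbol read) -> (symbol to write, head move, next state)
# 'H' is the halting state; moves are +1 (right) or -1 (left).
BB2_CHAMPION = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "H"),
}

def run_turing_machine(table, max_steps=10_000_000):
    """Run a machine on an all-zero tape; return (steps, ones written) if it halts."""
    tape = defaultdict(int)   # sparse tape, default symbol 0
    head, state, steps = 0, "A", 0
    while state != "H":
        if steps >= max_steps:
            return None       # give up: this machine may never halt
        write, move, next_state = table[(state, tape[head])]
        tape[head] = write
        head += move
        state = next_state
        steps += 1
    return steps, sum(tape.values())

def tetrate(base, height):
    """Right-associated power tower: tetrate(10, 3) == 10 ** (10 ** 10)."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

if __name__ == "__main__":
    print(run_turing_machine(BB2_CHAMPION))   # (6, 4): 6 steps, four 1s on the tape
    print(tetrate(2, 4))                      # 65536; tetrate(2, 5) already has
                                              # roughly 20,000 decimal digits
```

Even the fifth rung of that base-2 tower has around 20,000 decimal digits, which hints at why BB(6), bounded below by 2 tetrated to the 2 tetrated to the 2 tetrated to the 9, cannot be written out in ordinary notation.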


Forbes
18-05-2025
- Science
- Forbes
AI's Magic Cycle
Here's some of what innovators are thinking about with AI research today.

When people talk about the timeline of artificial intelligence, many of them start in the 21st century. That's forgivable if you don't know a lot about the history of how this technology evolved. It's only in this new millennium that most people around the world got a glimpse of what the future holds with these powerful LLM systems and neural networks. But for people who have been paying attention and understand the history of AI, it really goes back to the 1950s. In 1956, a number of notable computer scientists and mathematicians met at Dartmouth to discuss the evolution of intelligent computation systems. And you could argue that the idea of artificial intelligence really goes back much further than that. When Charles Babbage designed his Analytical Engine more than a century earlier, even rote computation wasn't something that machines could do. But when the mechanical became digital, and data became more portable in computation systems, we started to get those kinds of calculations and computing done in an automated way.

Now there's the question of why artificial intelligence didn't come along in the 1950s, or in the 1960s, or in the 1970s. 'The term 'Artificial Intelligence' itself was introduced by John McCarthy as the main vision and ambition driving research defined moving forward,' writes Alex Mitchell at Expert Beacon. '65 years later, that pursuit remains ongoing.' What it comes down to, I think most experts would agree, is that we didn't have the hardware. In other words, you can't build human-like systems when your input/output medium is magnetic tape. But in the 1990s, the era of big data began to take shape, and the cloud revolution followed. Once those pieces were in place, we had all of the systems we needed to host LLM intelligence.

Just to clarify what we're talking about here: most of the LLMs that we use work by predicting the next word or token – they're not sentient, per se, but they're using elegant and complex data sets to mimic intelligence. And to do that, they need big systems. That's why colossal data centers are being built right now, and why they require so much energy, so much cooling, and so on.

At an Imagination in Action event this April, I talked to Yossi Matias, a seasoned professional who has spent 19 years at Google and heads research there, about how research at Google works. He talked about a cycle of research motivation that involves publishing, vetting, and applying results back to have impact. But he also spoke to the idea that AI really goes back farther than most people think. 'It was always there,' he said, invoking the idea of the Dartmouth conference and what it represented. 'Over the years, the definition of AI has shifted and changed. Some aspects are kind of steady. Some of them are kind of evolving.'

Then he characterized the work of a researcher, to compare motives for groundbreaking work. 'We're curious as scientists who are looking into research questions,' he said, 'but quite often, it's great to have the right motivation to do that, which is to really solve an important problem.' 'Healthcare, education, climate crisis,' he continued. 'These are areas where making that progress, scientific progress … actually leads into impact, that is really impacting society and the climate.
So each of those I find extremely rewarding, not only in the intellectual curiosity of actually addressing them, but then taking that and applying it back to actually get into the impact that they'd like to get.'

Ownership of a process, he suggested, is important, too. 'An important aspect of talking about the nature of research at Google is that we are not seeing ourselves as a place where we're looking into research results, and then throwing them off the fence for somebody else to pick up,' he said. 'The beauty is that this magic cycle is really part of what we're doing.' He talked about teams looking at things like flood prediction, where he noted the potential for future advancements.

We also briefly went over the issue of quantum computing, where Matias suggested there's an important milestone ahead. 'We can actually reduce the quantum error, which is one of the hurdles, technological hurdles,' he said. 'So we see good progress, obviously, on our team.' One thing Matias noted was the work of Peter Shor, whose algorithm, he suggested, demonstrated some of the capabilities that quantum research could usher in. 'My personal prediction is that as we're going to get even closer to quantum computers that work, we're going to see many more use cases that we're not even envisioning today,' he noted.

Later, Matias spoke about his notion that AI should be assistive to humans, and not a replacement for human involvement. 'The fun part is really to come together, to brainstorm, to come up with ideas on things that we never anticipated coming up with, and to try out various stuff,' he said. Explaining how AI can fill in certain gaps in the scientific process, he described a quick cycle by which, by the time a paper is published on a new concept, that new concept can already be in place in, say, a medical office. 'The one area that I expect actually AI to do much more (in) is really (in) helping our doctors and nurses and healthcare workers,' Matias said.

I was impressed by the scope of what people have done, at Google and elsewhere. So whether it's education or healthcare or anything else, we're likely to see quick innovation, and applications of these technologies to our lives. And that's what the magic cycle is all about.
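The next-token point above can be made concrete with a toy example. The sketch below is plain Python with an invented five-word vocabulary standing in for a real model's learned weights; it is not Google's code or any production system, just the same structural loop a real LLM runs with a neural network in place of the lookup table.

```python
# Minimal sketch of next-token generation, the loop at the heart of LLM inference.
# The "model" here is a toy bigram lookup table; the vocabulary and probabilities
# are invented purely for illustration.
import random

# P(next token | current token) for a tiny made-up vocabulary.
BIGRAM_MODEL = {
    "the": {"cat": 0.5, "dog": 0.4, "end": 0.1},
    "cat": {"sat": 0.7, "ran": 0.2, "end": 0.1},
    "dog": {"ran": 0.6, "sat": 0.3, "end": 0.1},
    "sat": {"end": 1.0},
    "ran": {"end": 1.0},
}

def generate(prompt_token, max_tokens=10, seed=0):
    """Repeatedly sample the next token until the model emits 'end'."""
    rng = random.Random(seed)
    tokens = [prompt_token]
    for _ in range(max_tokens):
        dist = BIGRAM_MODEL[tokens[-1]]
        next_token = rng.choices(list(dist), weights=dist.values())[0]
        if next_token == "end":
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the dog ran" -- the output depends on the sampled path
```

A production model replaces the lookup table with a network that scores an entire vocabulary given all of the preceding tokens, which is a large part of why the data centers mentioned above need so much compute, energy, and cooling.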


CNET
09-05-2025
- Science
- CNET
You Can Now Use LegoGPT to Turn Your Text Inputs Into Lego Designs
Ever wanted to take your Lego building game to the next level? A team of computer scientists at Carnegie Mellon University built LegoGPT, the first AI model that takes text inputs and turns them into physically stable Lego designs. Unlike a typical AI generator that might churn out a wacky, unbuildable design to fit the request you input, LegoGPT produces designs that abide by the laws of physics. According to the team's research, which can be found on GitHub, the AI model was trained on a dataset of over 47,000 Lego structures covering 28,000 unique 3D objects. The designs generated with LegoGPT were physically stable 98% of the time. The tool is free to access on GitHub. You can start by uploading pictures of your existing blocks to determine which building options you have.
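As a loose illustration of what screening for physical buildability can look like, here is a toy support check in Python that rejects layouts where a brick floats with nothing beneath it. It is a simplified stand-in for, not a description of, LegoGPT's actual stability analysis, and every name in it is invented for the example.

```python
# Toy heuristic for screening brick layouts: every brick above the baseplate must
# overlap at least one brick in the layer directly below it. This is a simplified
# stand-in for a real stability analysis, not LegoGPT's actual method.
from dataclasses import dataclass

@dataclass
class Brick:
    x: int      # left edge, in stud units
    y: int      # front edge, in stud units
    z: int      # layer index; 0 sits on the baseplate
    w: int = 2  # width in studs
    d: int = 4  # depth in studs

def footprints_overlap(a: Brick, b: Brick) -> bool:
    """True if the two bricks share at least one stud position in plan view."""
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.d and b.y < a.y + a.d)

def is_supported(layout: list[Brick]) -> bool:
    """Every non-ground brick must rest on a brick one layer below."""
    for brick in layout:
        if brick.z == 0:
            continue
        below = [b for b in layout if b.z == brick.z - 1]
        if not any(footprints_overlap(brick, b) for b in below):
            return False
    return True

tower = [Brick(0, 0, 0), Brick(1, 0, 1), Brick(2, 0, 2)]   # staircase of 2x4 bricks
floater = [Brick(0, 0, 0), Brick(10, 10, 1)]               # second brick hangs in midair
print(is_supported(tower))    # True: each brick overlaps the one below it
print(is_supported(floater))  # False: the upper brick has nothing beneath it
```

A check like this only catches bricks with no support at all; the published work goes further by verifying that whole assemblies hold together under gravity, which is what the 98% stability figure refers to.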