Algebra is more than alphabet soup – it's the language of algorithms and relationships


Yahoo | 15-05-2025

You scrambled up a Rubik's cube, and now you want to put it back in order. What sequence of moves should you make?
Surprise: You can answer this question with modern algebra.
Most folks who have been through high school mathematics courses will have taken a class called algebra – maybe even a sequence of classes called algebra I and algebra II that asked you to solve for x. The word 'algebra' may evoke memories of complicated-looking polynomial equations like ax² + bx + c = 0 or plots of polynomial functions like y = ax² + bx + c.
You might remember learning about the quadratic formula to figure out the solutions to these equations and find where the plot crosses the x-axis, too.
Equations and plots like these are part of algebra, but they're not the whole story. What unifies algebra is the practice of studying things – like the moves you can make on a Rubik's cube or the numbers on a clock face you use to tell time – and the way they behave when you put them together in different ways. What happens when you string together the Rubik's cube moves or add up numbers on a clock?
In my work as a mathematician, I've learned that many algebra questions come down to classifying objects by their similarities.
How did equations like ax² + bx + c = 0 and their solutions lead to abstract algebra?
The short version of the story is that mathematicians found formulas that looked a lot like the quadratic formula for polynomial equations where the highest power of x was three or four. But they couldn't do it for five. It took mathematician Évariste Galois and techniques he developed – now called group theory – to make a convincing argument that no such formula could exist for polynomials with a highest power of five or more.
So what is a group, anyway?
It starts with a set, which is a collection of things. The fruit bowl in my kitchen is a set, and the collection of things in it are pieces of fruit. The numbers 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12 also form a set. Sets on their own don't have too many properties – that is, characteristics – but if we start doing things to the numbers 1 through 12, or the fruit in the fruit bowl, it gets more interesting.
Let's call this set of numbers 1 through 12 'clock numbers.' Then, we can define an addition function for the clock numbers using the way we tell time. That is, to say '3 + 11 = 2' is the way we would add 3 and 11. It feels weird, but if you think about it, 11 hours past 3 o'clock is 2 o'clock.
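To make this concrete, here is a small sketch (mine, not from the article) of clock addition in Python, mapping ordinary addition onto the 1-through-12 clock face:

```python
def clock_add(a, b):
    """Add two clock numbers (1-12), wrapping around the clock face.

    Ordinary modular arithmetic uses 0-11, so we shift down by one,
    reduce mod 12, then shift back up to stay in the 1-12 labeling.
    """
    return (a + b - 1) % 12 + 1

# 11 hours past 3 o'clock is 2 o'clock:
print(clock_add(3, 11))  # 2
# Adding 12 changes nothing, just like going once around the clock:
print(clock_add(7, 12))  # 7
```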
Clock addition has some nice properties. It satisfies:
closure, where adding things in the set gives you something else in the set;
identity, where there's an element that doesn't change the value of other elements when added to them – adding 12 to any clock number gives back that same number;
associativity, where the way you group the additions doesn't matter: (a + b) + c = a + (b + c);
inverses, where you can undo whatever an element does; and
commutativity, where you can change the order in which you add clock numbers without changing the outcome: a + b = b + a.
By satisfying all these properties, mathematicians can consider clock numbers with clock addition a group. In short, a group is a set with some way of combining the elements layered on top. The set of fruit in my fruit bowl probably can't be made into a group easily – what's a banana plus an apple? But we can make a set of clock numbers into a group by showing that clock addition is a way of taking two clock numbers and getting to a new one that satisfies the rules outlined above.
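Because the set is finite, every one of these group axioms can be checked by brute force. A self-contained sketch (my verification, not the article's):

```python
clock = range(1, 13)
add = lambda a, b: (a + b - 1) % 12 + 1  # clock addition, 1-12 labeling

# Closure: every sum of clock numbers is again a clock number.
assert all(add(a, b) in clock for a in clock for b in clock)

# Identity: adding 12 leaves every element unchanged.
assert all(add(a, 12) == a for a in clock)

# Associativity: grouping doesn't matter.
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a in clock for b in clock for c in clock)

# Inverses: every element can be undone back to the identity, 12.
assert all(any(add(a, b) == 12 for b in clock) for a in clock)

# Commutativity: order doesn't matter.
assert all(add(a, b) == add(b, a) for a in clock for b in clock)

print("clock numbers with clock addition form a commutative group")
```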
Along with groups, the two other fundamental types of algebraic objects you would study in an introduction to modern algebra are rings and fields.
We could introduce a second operation for the clock numbers: clock multiplication, where 2 times 7 is 2, because 14 o'clock is the same as 2 o'clock. With clock addition and clock multiplication, the clock numbers meet the criteria for what mathematicians call a ring. This is primarily because clock multiplication and clock addition together satisfy a key component that defines a ring: the distributive property, where a(b + c) = ab + ac. Lastly, fields are rings that satisfy even more conditions.
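The distributive law can be checked the same brute-force way. A quick sketch (again mine, using the same 1-12 labeling as before):

```python
nums = range(1, 13)
add = lambda a, b: (a + b - 1) % 12 + 1  # clock addition
mul = lambda a, b: (a * b - 1) % 12 + 1  # clock multiplication

# 2 times 7 is 14 o'clock, which is 2 o'clock:
assert mul(2, 7) == 2

# The distributive property a(b + c) = ab + ac holds for every triple:
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a in nums for b in nums for c in nums)
print("distributivity holds")
```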
At the turn of the 20th century, mathematicians David Hilbert and Emmy Noether – who were interested in understanding how the principles in Einstein's relativity worked mathematically – unified algebra and showed the utility of studying groups, rings and fields.
Groups, rings and fields are abstract, but they have many useful applications.
For example, the symmetries of molecular structures are categorized by different point groups. A point group describes ways to move a molecule in space so that even if you move the individual atoms, the end result is indistinguishable from the molecule you started with.
But let's take a different example that uses rings instead of groups. You can set up a pretty complicated set of equations to describe a Sudoku puzzle: You need 81 variables to represent each place you can put a number in the grid, polynomial expressions to encode the rules of the game, and polynomial expressions that take into account the clues already on the board.
To get the spaces on the game board and the 81 variables to correspond nicely, you can use two subscripts to associate the variable with a specific place on the board, like using x₃₅ to represent the cell in the third row and fifth column.
The first entry must be one of the numbers 1 through 9, and we represent that relationship with (x₁₁ - 1)(x₁₁ - 2)(x₁₁ - 3) ⋅⋅⋅ (x₁₁ - 9). This expression is equal to zero if and only if x₁₁ is one of the numbers 1 through 9 – that is, if and only if you followed that rule of the game. Since every space on the board follows this rule, that's already 81 equations just to say, 'Don't plug in anything other than 1 through 9.'
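You can verify that behavior directly. A small sketch (mine) evaluating the per-cell polynomial:

```python
def cell_constraint(x):
    """(x - 1)(x - 2) ... (x - 9): zero exactly when x is one of 1..9."""
    value = 1
    for k in range(1, 10):
        value *= (x - k)
    return value

# Zero for every legal entry, nonzero for anything else:
assert all(cell_constraint(x) == 0 for x in range(1, 10))
assert cell_constraint(0) != 0 and cell_constraint(10) != 0
```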
The rule '1 through 9 each appear exactly once in the top row' can be captured with some sneaky pieces of algebraic thinking. The sum of the top row is going to add up to 45, which is to say x₁₁ + x₁₂ + ⋅⋅⋅ + x₁₉ - 45 will be zero, and the product of the top row is going to be the product of 1 through 9, which is to say x₁₁ x₁₂ ⋅⋅⋅ x₁₉ - 9⋅8⋅7⋅6⋅5⋅4⋅3⋅2⋅1 will be zero.
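Here is a sketch (mine) of those two row expressions as a check; note that they are necessary conditions the text describes, layered on top of the per-cell constraints:

```python
from math import prod

def row_ok(row):
    """Check the two row expressions from the text:
    sum(row) - 45 == 0 and product(row) - 9! == 0."""
    return sum(row) - 45 == 0 and prod(row) - 362880 == 0

assert row_ok([1, 2, 3, 4, 5, 6, 7, 8, 9])       # a valid top row
assert not row_ok([1, 1, 3, 4, 5, 6, 7, 8, 9])   # repeated 1: both checks fail
```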
If you're thinking that it takes more time to set up all these rules than it does to solve the puzzle, you're not wrong.
What do we get by doing this complicated translation into algebra? Well, we get to use late-20th century algorithms to figure out what numbers you can plug into the board that satisfy all the rules and all the clues. These algorithms are based on describing the structure of the special collection of polynomials – called an ideal – that these game board clues generate within the larger ring. The algorithms will tell you if there's no solution to the puzzle. If there are multiple solutions, the algorithms will find them all.
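The real algorithms work with ideals and Gröbner bases, but the idea of "find every assignment that makes all the constraint polynomials zero" can be sketched on a toy example. Below is a hypothetical miniature (mine, not the article's): a 2×2 board filled with 1s and 2s, where each row and column must contain both numbers and one clue pins down the top-left cell, solved by testing every assignment against the polynomial conditions:

```python
from itertools import product as cartesian

def constraints_zero(grid):
    """True when every constraint polynomial evaluates to zero."""
    (a, b), (c, d) = grid
    conds = [
        (a - 1) * (a - 2), (b - 1) * (b - 2),   # entries are 1 or 2
        (c - 1) * (c - 2), (d - 1) * (d - 2),
        a + b - 3, c + d - 3,                   # each row contains 1 and 2
        a + c - 3, b + d - 3,                   # each column contains 1 and 2
        a - 1,                                  # clue: top-left cell is 1
    ]
    return all(v == 0 for v in conds)

# Enumerate all assignments; keep the ones where every polynomial vanishes.
solutions = [((a, b), (c, d))
             for a, b, c, d in cartesian([1, 2], repeat=4)
             if constraints_zero(((a, b), (c, d)))]
print(solutions)  # [((1, 2), (2, 1))] -- the unique solution
```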
This is a small example where setting up the algebra is harder than just doing the puzzle. But the techniques generalize widely. You can use algebra to tackle problems in artificial intelligence, robotics, cryptography, quantum computing and so much more – all with the same bag of tricks you'd use to solve the Sudoku puzzle or Rubik's cube.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Courtney Gibbons, Hamilton College
Read more:
X marks the unknown in algebra – but X's origins are a math mystery
Emmy Noether faced sexism and Nazism – over 100 years later her contributions to ring theory still influence modern math
Taking a leap of faith into imaginary numbers opens new doors in the real world through complex analysis
Courtney Gibbons is affiliated with the Association for Women in Mathematics and the American Mathematical Society.


Related Articles


Physicists Designed a Quantum Rubik's Cube And Found The Best Way to Solve It

Yahoo | 15-04-2025

Quantum physics already feels like a puzzle, but now scientists have made it more literal. A team of mathematicians from the University of Colorado Boulder has designed a quantum Rubik's cube, with infinite possible states and some weird new moves available to solve it.

The classic (and classical) Rubik's cube is what's known as a permutation puzzle, which requires players to perform certain actions to rearrange one of a number of possible permutations into a 'solved' state. In the case of the infamous cube, that's around 43 quintillion possible combinations of small colored blocks being sorted into six consistently colored faces through a series of constrained movements. But a quantum Rubik's cube cranks that possibility space up to infinity. All it takes is to give the solver a new quantum action – the ability to move a piece into a quantum superposition where it is both moved and not moved at the same time. "With superpositions, the number of unique allowed states of the puzzle is infinite, unlike common permutation puzzles from toy stores," the researchers write in their paper.

The team tested the idea on a simple version of a permutation puzzle: a two-dimensional 2×2 grid made up of just blue and green tiles. The solved state was to place the two green tiles above the two blue ones. In its classical form, the puzzle only has six possible permutations, including the solved state. Any state can be transformed into any other through a sequence of swapping vertical and horizontal tiles – swapping diagonal tiles is forbidden, as is rotating the whole puzzle.

This basic puzzle can be given a quantum flavor by calling the colors 'particles,' and pointing out that because each tile is indistinguishable from the other of the same color, they are in a sense entangled. Though the 'particles' have a quantum touch, in practice the puzzle itself is still played using classical moves.
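The classical version of that 2×2 tile puzzle is small enough to explore exhaustively. Here is a sketch (mine, not the paper's simulation) that enumerates its six states and finds the minimum number of swaps needed from any state, via breadth-first search:

```python
from collections import deque

# Positions 0,1 form the top row; 2,3 the bottom row. Tiles are 'G'/'B'.
# Allowed classical moves: swap horizontally or vertically adjacent tiles.
SWAPS = [(0, 1), (2, 3), (0, 2), (1, 3)]
SOLVED = ('G', 'G', 'B', 'B')  # green tiles above blue tiles

def neighbors(state):
    for i, j in SWAPS:
        s = list(state)
        s[i], s[j] = s[j], s[i]
        yield tuple(s)

# Breadth-first search: minimum swaps from the solved state to each state.
dist = {SOLVED: 0}
queue = deque([SOLVED])
while queue:
    s = queue.popleft()
    for n in neighbors(s):
        if n not in dist:
            dist[n] = dist[s] + 1
            queue.append(n)

print(len(dist))           # 6: all permutations are reachable
print(max(dist.values()))  # 2: worst case is two swaps away
```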
A truly quantum version opens up when superpositions between two different particles are allowed. Three different types of simulated players were put to work solving the puzzle from 2,000 random scrambles. A classical solver's only move was to swap two adjacent tiles. A quantum solver could only enter pairs into quantum superpositions. And a combined solver could perform either action each time.

Unsurprisingly, the combined solver performed the best, solving the puzzle in an average of 4.77 moves. The quantum solver was next, with an average of 5.32 moves, while the classical solver came in last place with 5.88 moves on average. That's not to say the realm of classical physics doesn't have its advantages though. The classical solver can actually reach the solution in fewer than five moves more often than the quantum solver. But it blows out its average because it can often take twice that long, where the quantum solver almost always finishes in eight moves or fewer. This so-called quantum advantage should become more pronounced with more complex puzzles, the team says.

After a solver works through the permutations using their allowed moves – either classical, quantum, or both – the solution is then verified through a 'referee.' If you're familiar with the old Schrödinger's cat thought experiment, you'll remember that the measurement itself causes the superposition to randomly become just one of the states. Ideally, that would be the solved state, but if not, the puzzle is scrambled again and the solver has to start over. That's how the classical solver can even begin to tackle a quantum puzzle. Unless they get extremely lucky and the scrambled state is one of the six classical possibilities (out of infinite quantum options), they'll have to make moves that get them as close as possible to the solution, and hope that the measurement collapses the superposition into the solved state.
Although the quantum solver seems to have the home ground advantage, it has one downside: it takes two moves to perform a classical swap operation. That's how the classical solver gets an early head start in some versions of the puzzle, and why the combined solver always has the lead.

The team also went on to create a 3D version of the quantum puzzle, albeit not a full cube. It was 2x2x1 tiles, which also had infinite possibilities and could be solved through similar actions. In practice, quantum permutation puzzles could potentially be built using arrays of ultracold atoms suspended in optical lattices. But mostly, it's a thought experiment for math nerds.

The research has been accepted for publication in the journal Physical Review A, and is currently available on the preprint server arXiv.

Read more:
Trillionth of a Second Shutter Speed Camera Snaps Chaos in Action
We Now Know Better Than Ever What a Ghost Particle Doesn't Weigh
This Bizarre Shape-Shifting Liquid Bends The Laws of Thermodynamics

Scientists Quantified The Speed of Human Thought, And It's a Big Surprise

Yahoo | 23-12-2024

The speed of the human brain's ability to process information has been investigated in a new study, and according to scientists, we're not as mentally quick as we might like to think. In fact, research suggests our brains process information at a speed of just 10 bits per second. But how is this possible, in comparison to the trillions of operations computers can perform every second? Research suggests this is the result of how we internally process thoughts in single file, making for a slow, congested queue.

This stands in stark contrast to the way the peripheral nervous system operates, amassing sensory data at gigabits a second in parallel, magnitudes higher than our paltry 10-bit cognitive computer. To neurobiologists Jieyu Zheng and Markus Meister from the California Institute of Technology, this mismatch in sensory input and processing speed poses something of a mystery. "Every moment, we are extracting just 10 bits from the trillion that our senses are taking in and using those 10 to perceive the world around us and make decisions," says Meister. "This raises a paradox: What is the brain doing to filter all of this information?"

In their recently published paper, Zheng and Meister raise a clear defense of the suggestion that in spite of the richness of the scenery in our mind's eye, the existence of photographic memory, and the potential of unconscious processing, our brains really do operate at a mind-numbingly slow pace that rarely peaks above tens of bits a second. According to the researchers, solving a Rubik's cube blindfolded requires processing of just under 12 bits a second. Playing the strategy computer game StarCraft at a professional level? Around 10 bits a second. Reading this article? That might stretch you to 50 bits a second, at least temporarily.
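To get a feel for what a "bit per second" means here, consider a back-of-envelope calculation (mine, not the study's): pinning down one particular scramble of a classical Rubik's cube among its roughly 43 quintillion configurations takes about 65 bits of information, so at around 12 bits per second that identification alone represents several seconds of pure information intake:

```python
from math import log2

# Number of configurations of a classical Rubik's cube (~43 quintillion).
states = 43_252_003_274_489_856_000

# Bits needed to single out one configuration among them:
bits = log2(states)
print(round(bits, 1))       # ~65.2 bits

# At roughly 12 bits per second, that much information takes:
print(round(bits / 12, 1))  # ~5.4 seconds
```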
Assuming it's true, the pair lay out the state of research on the disparity between our "outer brain's" processing of external stimuli and the "inner brain's" calculations, demonstrating just how little we know about our own thinking. "The current understanding is not commensurate with the enormous processing resources available, and we have seen no viable proposal for what would create a neural bottleneck that forces single-strand operation," the authors write.

The human brain is a beast when it comes to pure analytical power. Its 80-odd billion neurons form trillions of connections grouped in ways that allow us to feel, imagine, and plan our way through existence with other humans by our sides. Fruit flies, on the other hand, have maybe a hundred thousand or so neurons, which is plenty for them to find food, flap about, and talk fly-business with other flies. Why couldn't a single human brain behave like a swarm of flies, each unit processing a handful of bits each second collectively at super speed?

Though there are no obvious answers, Zheng and Meister propose it may simply have to do with necessity. Or rather, a lack of necessity. "Our ancestors have chosen an ecological niche where the world is slow enough to make survival possible," the team writes. "In fact, the 10 bits per second are needed only in worst-case situations, and most of the time our environment changes at a much more leisurely pace." Research into comparable rates of processing in other species is remarkably limited, the pair explain, though what they could locate seems to validate a view that generally our external environment only changes at a rate that requires decision-making to occur at a few bits a second.

What might we make of a future where we demand more of our bottlenecked brains, perhaps through technological advances that link our single-file cognitive computing directly with a computer's parallel processing?
Knowing how our brains evolved could give us insights into both improving artificial intelligence and shaping it to suit our especially particular neural architecture. At the very least, it could reveal the deeper benefits of slowing down and approaching the world one simple question at a time.

This perspective was published in Neuron.

Read more:
There's Something Strange About These Ancient Egyptian Sheep Horns
Flat Earthers Went to Antarctica to Look at The Sun. Here's What Happened.
Twins Were Typical Among Our Primate Ancestors. What Changed?
