
Latest news with #Sudoku

25 Next-Level Stationery And School Supplies

Buzz Feed

9 hours ago

  • General
  • Buzz Feed


1. A set of 10 retractable Crayola markers, which honestly should have always existed. How many of us suffered the fate of losing a marker cap, or attempted to color only to find the red had completely dried out? Kids these days have it easy: just click and a fresh marker tip pops out, ready for art.
2. Stackable octopus highlighters that I can almost guarantee no one else will have. Your child? A trendsetter. You? A thief already plotting how to kidnap these cute lil' critters.
3. A mechanical calculator with typewriter-style keys that'll make crunching the numbers oh-so satisfying. Whether your kiddo needs to do some quick calculations or you need to check their math homework, this battery-powered gadget will make everyone feel like an old-timey accountant.
4. An XL gluestick for massive projects that need more than a lil' adhesive. Small children may need to use both hands when holding this thing, which is really cute to imagine.
5. A pack of pastel pencils with dyed wood, so when you sharpen them, they produce lil' rainbow arches. They're made with #2 HB graphite, so Scantron tests are about to get a whole lot more whimsical.
6. Ticonderoga erasers shaped like classic yellow pencils, offering a somewhat confusing notetaking experience. Unlike *some* novelty erasers, these actually get the job done, sans smudges.
7. Or some animal-themed erasers with caps to keep them safe in between mistakes (whether they be homework- or Sudoku-related). You can get a big pack of 8 or 12, so feel free to share with friends and classmates!
8. A sheet of stickers featuring little checkbox droplets to help students keep track of their water intake. Place them in an agenda as a cute reminder to stay hydrated. Remembering to drink water never gets easier, so grab some for yourself as well!
9. A large pencil case with a gazillion different pockets and roomy compartments for everything your child could possibly need during a school day. Gel pens, markers, tape, scissors, and more will have a home! Gee, maybe you need one too for your crafting tools?
10. Or a plush lobster pencil case featuring a zippered smile that is just — OK, I need a minute — TOO CUTE! It might not fit a ton of stuff, but how many pens does one person really need?
11. A splurgey Logitech mechanical keyboard you can gift your child in a last-ditch effort to convince them to take care of their poor wrists. It has nice, clackety keys, RGB backlighting, and comes with a cute wrist pillow that'll feel like resting on a cloud. I don't blame you if you find yourself using their computer whenever you wanna check your email.
12. Kutsuwa highlighter pencils that will have everyone swearing off the marker version for life. These offer more control, don't smudge or bleed, and don't get ink on your hands. Plus, they're just a really cool twist on a school supply classic!
13. A miniature package (complete with faux packing tape and bubble wrap!) with a surprise shipment inside: STICKERS! Agendas, scrapbooks, and calendars will feel so much more lively when decked out in themed, glittery stickies. Kinda losing it over how cute this presentation is. Hope your kid shares some with you!
14. A huge mechanical pencil set featuring six pencils, two widths of graphite refills, nine eraser toppers, and three gradient-style erasers. The whole kit has a dreamy pastel theme that's sure to make you wonder if you should grab one for yourself as well. It's an Amazon top seller, so lots of people had the same thought!
15. A big pack of tempera paint sticks that are easier to handle than paintbrushes and are just sooo satisfying to use. They glide easily, dry fast, and have vibrant hues that show up on even unconventional canvases, like pumpkins.
16. Uniquely shaped Mr. Pen sharpeners with built-in erasers, because who doesn't love a multitasker? They're easy for small hands to hold and come with holes in the center, which I can only assume are for sick pen tricks.
17. "Liquipens" — a cross between a pen and a lava lamp. These are fun to play with and likely mesmerizing enough to keep everyone entertained while taking notes in class or trudging through homework.
18. An expanding file folder for all their homework assignments, field trip forms, and loose documents. As an accordion folder follower, I have to say these things are ALSO top-notch for adults who want to organize everything from restaurant menus to ticket stubs.
19. A pack of twistable crayons that people have described as a crayon-oil pastel hybrid. They glide like silk and help you create decadent, vibrant masterpieces. But the real selling point? The adorable bear-shaped caps!
20. A pack of textured pencil sleeves with satisfying swirls and ridges kids and adults alike will love to play with while concentrating on work. These are great for people with anxiety, ADHD, autism, limited mobility, and more.
21. Journal stamps that add some extra structure to your agenda. The 18-piece kit comes with various designs that'll allow you or your type-A student to add the days of the week, star ratings, reading lists, and more.
22. A sweet pair of scissors housed in a fruit-shaped magnet holder your child can proudly display in their locker — it'll also make it easy to actually find those darn things when they need them. Might as well also grab a pair for yourself, because they'd look soooo cute on your fridge.
23. A weekly sticky notepad so you or your organized child can plan out the whole week and then stick it somewhere visible for easy reference. It's a lil' less intrusive and more free-flowing than a traditional calendar, like an analog Google cal.
24. A pair of the *iconic* Kaco pens (known for their adorable heart tops) in your choice of Pantone color. They have every Color of the Year going back to 2000, so you can pick a year with sentimental value, like the year your kid was born!
25. A nose-shaped pencil sharpener, because your actual nose won't sharpen anything. Kids will find this hilarious, and adults will also wanna pick up this funny school supply.

Herald Sport 10-fer weekend quiz

NZ Herald

2 days ago

  • Sport
  • NZ Herald


All Blacks captain Richie McCaw holds aloft the Rugby World Cup trophy in 2011. Photo / Photosport

Test your sporting knowledge in the Herald Sport weekend quiz. Can you get all 10 answers correct and claim a rare 10-fer? Want more quizzes? Test your knowledge in the Herald's Morning quiz and Afternoon quiz, or take on the latest Sudoku or crosswords. For the latest sporting news, head to the Herald Sport page to keep updated on the All Blacks, Super Rugby Pacific, Black Caps, Silver Ferns, Warriors and all the Kiwi athletes competing around the world. Quiz compiled by Cameron McMillan.

Ask AI Why It Sucks at Sudoku. You'll Find Out Something Troubling About Chatbots

CNET

3 days ago

  • CNET


Chatbots are genuinely impressive when you watch them do things they're good at, like writing a basic email or creating weird futuristic-looking images. But ask generative AI to solve one of those puzzles in the back of a newspaper, and things can quickly go off the rails. That's what researchers at the University of Colorado Boulder found when they challenged large language models to solve Sudoku. And not even the standard 9x9 puzzles. An easier 6x6 puzzle was often beyond the capabilities of an LLM without outside help (in this case, specific puzzle-solving tools). A more important finding came when the models were asked to show their work. For the most part, they couldn't. Sometimes they lied. Sometimes they explained things in ways that made no sense. Sometimes they hallucinated and started talking about the weather. If gen AI tools can't explain their decisions accurately or transparently, that should cause us to be cautious as we give these things more control over our lives and decisions, said Ashutosh Trivedi, a computer science professor at the University of Colorado Boulder and one of the authors of the paper published in July in the Findings of the Association for Computational Linguistics. "We would really like those explanations to be transparent and be reflective of why AI made that decision, and not AI trying to manipulate the human by providing an explanation that a human might like," Trivedi said. When you make a decision, you can try to justify it, or at least explain how you arrived at it. An AI model may not be able to accurately or transparently do the same. Would you trust it?

Why LLMs struggle with Sudoku

We've seen AI models fail at basic games and puzzles before. OpenAI's ChatGPT (among others) has been totally crushed at chess by the computer opponent in a 1979 Atari game.
A recent research paper from Apple found that models can struggle with other puzzles, like the Tower of Hanoi. It has to do with the way LLMs work and fill in gaps in information. These models try to complete those gaps based on what happens in similar cases in their training data or other things they've seen in the past. With a Sudoku, the question is one of logic. The AI might try to fill each gap in order, based on what seems like a reasonable answer, but to solve it properly, it instead has to look at the entire picture and find a logical order that changes from puzzle to puzzle. Chatbots are bad at chess for a similar reason. They find logical next moves but don't necessarily think three, four, or five moves ahead, which is the fundamental skill needed to play chess well. Chatbots also sometimes move chess pieces in ways that don't really follow the rules or put pieces in meaningless jeopardy. You might expect LLMs to be able to solve Sudoku because they're computers and the puzzle consists of numbers, but the puzzles themselves are not really mathematical; they're symbolic. "Sudoku is famous for being a puzzle with numbers that could be done with anything that is not numbers," said Fabio Somenzi, a professor at CU and one of the research paper's authors. I used a sample prompt from the researchers' paper and gave it to ChatGPT. The tool showed its work and repeatedly told me it had the answer before showing a puzzle that didn't work, then going back and correcting it. It was like the bot was turning in a presentation that kept getting last-second edits: This is the final answer. No, actually, never mind, this is the final answer. It got the answer eventually, through trial and error. But trial and error isn't a practical way for a person to solve a Sudoku in the newspaper. That's way too much erasing, and it ruins the fun.
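The trial-and-error process ChatGPT stumbled through is, in classical terms, backtracking: place a candidate digit, recurse, and erase it if the attempt dead-ends. This minimal Python sketch (function names are my own, not from the paper) shows the technique on a 9x9 grid, with 0 marking an empty cell:

```python
def find_empty(grid):
    """Return the first empty cell as (row, col), or None if the grid is full."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                return r, c
    return None

def is_valid(grid, r, c, v):
    """True if placing v at (r, c) creates no row, column, or 3x3 box conflict."""
    if any(grid[r][j] == v for j in range(9)):
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)  # top-left corner of the 3x3 box
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    """Solve in place by trial and error: try a digit, recurse, undo on failure."""
    cell = find_empty(grid)
    if cell is None:
        return True  # no empty cells left: solved
    r, c = cell
    for v in range(1, 10):
        if is_valid(grid, r, c, v):
            grid[r][c] = v
            if solve(grid):
                return True
            grid[r][c] = 0  # backtrack: erase and try the next digit
    return False
```

Unlike an LLM's guesswork, this search is exhaustive and its "reasoning" is fully inspectable: every placement is either justified by `is_valid` or undone.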
AI and robots can be good at games if they're built to play them, but general-purpose tools like large language models can struggle with logic puzzles. Ore Huiying/Bloomberg via Getty Images

AI struggles to show its work

The Colorado researchers didn't just want to see if the bots could solve puzzles. They asked for explanations of how the bots worked through them. Things did not go well. Testing OpenAI's o1-preview reasoning model, the researchers saw that the explanations -- even for correctly solved puzzles -- didn't accurately explain or justify their moves and got basic terms wrong. "One thing they're good at is providing explanations that seem reasonable," said Maria Pacheco, an assistant professor of computer science at CU. "They align to humans, so they learn to speak like we like it, but whether they're faithful to what the actual steps need to be to solve the thing is where we're struggling a little bit." Sometimes, the explanations were completely irrelevant. Since the paper's work was finished, the researchers have continued to test newly released models. Somenzi said that when he and Trivedi were running OpenAI's o4 reasoning model through the same tests, at one point it seemed to give up entirely. "The next question that we asked, the answer was the weather forecast for Denver," he said. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Explaining yourself is an important skill

When you solve a puzzle, you're almost certainly able to walk someone else through your thinking. The fact that these LLMs failed so spectacularly at that basic job isn't a trivial problem. With AI companies constantly talking about "AI agents" that can take actions on your behalf, being able to explain yourself is essential.
Consider the types of jobs being given to AI now, or planned for in the near future: driving, doing taxes, deciding business strategies and translating important documents. Imagine what would happen if you, a person, did one of those things and something went wrong. "When humans have to put their face in front of their decisions, they better be able to explain what led to that decision," Somenzi said. It isn't just a matter of getting a reasonable-sounding answer. It needs to be accurate. One day, an AI's explanation of itself might have to hold up in court, but how can its testimony be taken seriously if it's known to lie? You wouldn't trust a person who failed to explain themselves, and you also wouldn't trust someone you found was saying what you wanted to hear instead of the truth. "Having an explanation is very close to manipulation if it is done for the wrong reason," Trivedi said. "We have to be very careful with respect to the transparency of these explanations."

AI Sucks at Sudoku. Much More Troubling Is That It Can't Explain Why

CNET

4 days ago

  • CNET


Chatbots can be genuinely impressive when you watch them do things they're good at, like writing realistic-sounding text or creating weird futuristic-looking images. But ask generative AI to solve one of those puzzles you find in the back of a newspaper, and things can quickly go off the rails. That's what researchers at the University of Colorado Boulder found when they challenged different large language models to solve Sudoku. And not even the standard 9x9 puzzles. An easier 6x6 puzzle was often beyond the capabilities of an LLM without outside help (in this case, specific puzzle-solving tools). The more important finding came when the models were asked to show their work. For the most part, they couldn't. Sometimes they lied. Sometimes they explained things in ways that made no sense. Sometimes they hallucinated and started talking about the weather. If gen AI tools can't explain their decisions accurately or transparently, that should cause us to be cautious as we give these things more and more control over our lives and decisions, said Ashutosh Trivedi, a computer science professor at the University of Colorado Boulder and one of the authors of the paper published in July in the Findings of the Association for Computational Linguistics. "We would really like those explanations to be transparent and be reflective of why AI made that decision, and not AI trying to manipulate the human by providing an explanation that a human might like," Trivedi said. When you make a decision, you can at least try to justify it or explain how you arrived at it. That's a foundational component of society. We are held accountable for the decisions we make. An AI model may not be able to accurately or transparently explain itself. Would you trust it?

Why LLMs struggle with Sudoku

We've seen AI models fail at basic games and puzzles before. OpenAI's ChatGPT (among others) has been totally crushed at chess by the computer opponent in a 1979 Atari game.
A recent research paper from Apple found that models can struggle with other puzzles, like the Tower of Hanoi. It has to do with the way LLMs work and fill in gaps in information. These models try to complete those gaps based on what happens in similar cases in their training data or other things they've seen in the past. With a Sudoku, the question is one of logic. The AI might try to fill each gap in order, based on what seems like a reasonable answer, but to solve it properly, it instead has to look at the entire picture and find a logical order that changes from puzzle to puzzle. Chatbots are bad at chess for a similar reason. They find logical next moves but don't necessarily think three, four or five moves ahead. That's the fundamental skill needed to play chess well. Chatbots also sometimes move chess pieces in ways that don't really follow the rules or put pieces in meaningless jeopardy. You might expect LLMs to be able to solve Sudoku because they're computers and the puzzle consists of numbers, but the puzzles themselves are not really mathematical; they're symbolic. "Sudoku is famous for being a puzzle with numbers that could be done with anything that is not numbers," said Fabio Somenzi, a professor at CU and one of the research paper's authors. I used a sample prompt from the researchers' paper and gave it to ChatGPT. The tool showed its work and repeatedly told me it had the answer before showing a puzzle that didn't work, then going back and correcting it. It was like the bot was turning in a presentation that kept getting last-second edits: This is the final answer. No, actually, never mind, this is the final answer. It got the answer eventually, through trial and error. But trial and error isn't a practical way for a person to solve a Sudoku in the newspaper. That's way too much erasing, and it ruins the fun.
AI and robots can be good at games if they're built to play them, but general-purpose tools like large language models can struggle with logic puzzles. Ore Huiying/Bloomberg via Getty Images

AI struggles to show its work

The Colorado researchers didn't just want to see if the bots could solve puzzles. They asked for explanations of how the bots worked through them. Things did not go well. Testing OpenAI's o1-preview reasoning model, the researchers saw that the explanations -- even for correctly solved puzzles -- didn't accurately explain or justify their moves and got basic terms wrong. "One thing they're good at is providing explanations that seem reasonable," said Maria Pacheco, an assistant professor of computer science at CU. "They align to humans, so they learn to speak like we like it, but whether they're faithful to what the actual steps need to be to solve the thing is where we're struggling a little bit." Sometimes, the explanations were completely irrelevant. Since the paper's work was finished, the researchers have continued to test newly released models. Somenzi said that when he and Trivedi were running OpenAI's o4 reasoning model through the same tests, at one point it seemed to give up entirely. "The next question that we asked, the answer was the weather forecast for Denver," he said. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Explaining yourself is an important skill

When you solve a puzzle, you're almost certainly able to walk someone else through your thinking. The fact that these LLMs failed so spectacularly at that basic job isn't a trivial problem. With AI companies constantly talking about "AI agents" that can take actions on your behalf, being able to explain yourself is essential.
Consider the types of jobs being given to AI now, or planned for in the near future: driving, doing taxes, deciding business strategies and translating important documents. Imagine what would happen if you, a person, did one of those things and something went wrong. "When humans have to put their face in front of their decisions, they better be able to explain what led to that decision," Somenzi said. It isn't just a matter of getting a reasonable-sounding answer. It needs to be accurate. One day, an AI's explanation of itself might have to hold up in court, but how can its testimony be taken seriously if it's known to lie? You wouldn't trust a person who failed to explain themselves, and you also wouldn't trust someone you found was saying what you wanted to hear instead of the truth. "Having an explanation is very close to manipulation if it is done for the wrong reason," Trivedi said. "We have to be very careful with respect to the transparency of these explanations."

Hard Sudoku: August 6, 2025

Scientific American

5 days ago

  • Entertainment
  • Scientific American


Solve the grid in this hard Sudoku puzzle! The objective of Sudoku is to fill each row, column and sub-grid with exactly one of each number from 1-9. A conflict arises if you repeat any entry in the same row, column or sub-grid. For more, select "How to Play" in the game's dropdown menu. Use the "Play Together" option in the navigation bar to invite a friend to play this puzzle with you and enter numbers at the same time.
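The conflict rule described above is simple to state in code. Here is a minimal sketch (my own illustration, not Scientific American's implementation) that scans a 9x9 grid, with 0 standing for an empty cell, and reports every repeated entry by row, column, or sub-grid:

```python
def conflicts(grid):
    """Return (unit_name, index, value) for every repeat in a 9x9 grid (0 = empty)."""
    units = []
    for i in range(9):
        units.append(("row", i, [grid[i][j] for j in range(9)]))
        units.append(("column", i, [grid[j][i] for j in range(9)]))
    for b in range(9):
        br, bc = 3 * (b // 3), 3 * (b % 3)  # top-left corner of sub-grid b
        units.append(("sub-grid", b, [grid[br + i][bc + j]
                                      for i in range(3) for j in range(3)]))
    found = []
    for name, idx, values in units:
        seen = set()
        for v in values:
            if v and v in seen:  # empty cells (0) never conflict
                found.append((name, idx, v))
            seen.add(v)
    return found
```

A solved grid is one with no empty cells and `conflicts(grid) == []`, which is exactly the "exactly one of each number from 1-9 per unit" objective stated above.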
