
Magnus Carlsen Beats ChatGPT in Chess Without Losing a Piece
Magnus Carlsen won the game without losing a single piece, while ChatGPT lost all of its pawns, according to screenshots the Norwegian grandmaster shared on X. ChatGPT resigned.
'I sometimes get bored while travelling,' Carlsen captioned the post.
'That was methodical, clean, and sharp. Well played!' ChatGPT said to him, according to the screenshots Carlsen posted.
Carlsen told the AI bot that he thought it 'played really well in the opening,' but ultimately 'failed to follow it up correctly.' He went on to ask ChatGPT for feedback on his performance.
'Your play showed several strong traits,' ChatGPT told him. It complimented him for his opening, patience, tactical awareness, and endgame technique. It estimated his classical strength to be around 1800-2000 FIDE or USCF, though the AI bot acknowledged it could be higher.
Both FIDE and USCF ratings calculate chess players' skill levels based on their performance in official tournaments using the Elo rating system, which takes into account the outcomes of matches and the rating of each player going in. Under the system, matches with unexpected results—such as those in which lower-ranked players defeat higher-ranked ones—have a greater impact on players' ratings.
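The update rule behind those ratings can be sketched in a few lines. This is a minimal illustration of the standard Elo formula, not the exact procedure either federation uses; the K-factor of 20 is an assumption (federations vary K by a player's rating and age):

```python
def expected_score(rating_a, rating_b):
    """Probability that player A scores against player B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating, opponent_rating, score, k=20):
    """New rating after one game (score: 1 = win, 0.5 = draw, 0 = loss)."""
    return rating + k * (score - expected_score(rating, opponent_rating))

# An upset moves ratings more: a 1600 beating a 2000 gains roughly
# ten times what a 2000 gains for beating a 1600.
gain_upset = elo_update(1600, 2000, 1) - 1600
gain_expected = elo_update(2000, 1600, 1) - 2000
```

Because the expected score for the lower-rated player is small, the difference `score - expected` is large when the underdog wins, which is why upsets shift ratings the most.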
FIDE ratings are managed by global chess's governing body, the International Chess Federation (FIDE), and USCF ratings by the United States Chess Federation. Carlsen's actual FIDE rating is 2839, making him the top-rated player in the world.
Carlsen is widely considered to be the best chess player in history. The 34-year-old has won the World Chess Championship five times, most recently in 2021. He has not competed in the championship since then, previously saying, 'I don't have any inclination to play' in it.

Related Articles

Business Insider
Your chats with Meta's AI might end up on Google — just like ChatGPT until it turned them off
OpenAI's ChatGPT raised some eyebrows this week when people realized that certain chats could be found via Google search. Although users had checked a box to share those chats publicly, it seemed likely that not everyone understood what they were doing. On Thursday, OpenAI said it would stop having shared chats indexed by Google.

Meanwhile, Meta's stand-alone Meta AI app also allows users to share their chats, and it will continue to allow Google to index them, meaning they can show up in search results. I did a number of Google searches and found plenty of Meta AI conversations in the results.

The Meta AI app, launched this spring, lets people share chats to a "Discover" feed. Google's crawlers can "index" that feed and then serve up the results when people use Google search. So, for instance, if you do a site-specific search on Google for " and the keyword "balloons," you might turn up a chat someone had with the Meta AI bot about where to get the best birthday balloons, provided that person tapped the button to allow the chat to be shared.

As Business Insider reported in June, the Meta AI Discover feed had been full of chats that seemed personal in nature: medical questions, specific career advice, relationship matters. Some contained identifying information such as phone numbers, email addresses, or full names. Although all of these people did click to share, the personal nature of some of the chats suggests that many misunderstood what sharing actually meant.

After Business Insider wrote about this a few weeks ago, the Meta AI app made some tweaks to warn users more clearly about how the Discover feed works. Now, when you choose to share a conversation, you get a pop-up with the warning: "Conversations on feed are public so anyone can see them and engage." The additional warning seems to be working.
Scrolling through the Discover feed now, I see mainly people using it for image creation and far fewer accidentally public text conversations (although there still seemed to be at least a few). Meanwhile, Daniel Roberts, a representative for Meta, confirmed that Meta AI chats shared to the Discover feed would continue to be indexed by Google, and reiterated the multi-step process described above.

For now, Meta AI can only be used via its mobile app, not the web. That might lead people to think the Discover feed is a sort of walled garden, separate from "the internet" and existing only within the Meta AI app. But posts from the Discover feed (and only those public posts) can be shared as links around the web, and that is where the Google indexing comes in. If this sounds slightly confusing, it is, and that confusion is likely to trip up users.

It's possible that some people really do want to share their AI chats with the general public, and are happy to have those chats show up in Google searches alongside their Instagram or Facebook handles. But I'm still not sure why anyone would want to share these interactions, or why anyone else would want to read them.

Engadget
OpenAI is removing ChatGPT conversations from Google
OpenAI has removed a feature that made shared ChatGPT conversations appear in search results. The "short-lived experiment" was built on the chatbot's link-creation option. After complaints, OpenAI's chief information security officer, Dane Stuckey, said the company is working to remove the chats from search engines.

The public outrage stems from a Fast Company article earlier this week (via Ars Technica). Fast Company said it found thousands of ChatGPT conversations in Google search results. The indexed chats didn't explicitly include identifying information, but in some cases their contents reportedly contained specific details that could point to the source.

To be clear, this wasn't a hack or a leak. It was tied to a box users could tick when creating a shareable URL directing to a chat. In the pop-up for creating a public link, the option "Make this chat discoverable" appeared, with the more direct explanation ("allows it to be shown in web searches") in a smaller, grayer font below. Users had to tick that box for the chat to be indexed.

You may wonder why people creating a public link to a chat would object to its contents being public. But Fast Company noted that people could have made the URLs to share in messaging apps, or as an easy way to revisit the chats later. Regardless, the public discoverability option is gone now.

In Fast Company's report, Stuckey defended the feature's labeling as "sufficiently clear." But after the outcry grew, OpenAI relented. "Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to, so we're removing the option," Stuckey announced on Thursday.

Business Insider
OpenAI's education head says students should use ChatGPT as a tool, not 'an answer machine'
Luddites have no place in an AI-powered world, according to OpenAI's vice president of education. "Workers who use AI in the workforce are incredibly more productive," Leah Belsky, who has led OpenAI's education team since 2024, said on an episode of the company's podcast on Friday. So learning to use the technology, she said, should start early. "Any graduate who leaves institution today needs to know how to use AI in their daily life," she said. "And that will come in both where they're applying for jobs as well as when they start their new job."

Most schools have so far sought to prevent students from using AI rather than encouraging or teaching it, partly because AI use in school is considered cheating. There is also concern that using AI can cause so-called "brain rot."

Belsky thinks about it differently. "AI is ultimately a tool," she said, at one point comparing it to a calculator. "What matters most in an education space is how that tool is used. If students use AI as an answer machine, they are not going to learn. And so part of our journey here is to help students and educators use AI in ways that will expand critical thinking and expand creativity."

The "core literacy" students should develop, she said, is coding. "Now, with vibe coding and now that there are all sorts of tools that make coding easier, I think we're going to get to a place where every student should not only learn how to use AI generally, but they should learn to use AI to create images, to create applications, to write code," she said.

Vibe coding is the practice of prompting AI in natural language to write code for whatever you want. It has been widely embraced, but many developers avoid relying on it for core systems because AI-generated code is prone to errors. Anyone vibe coding needs some coding knowledge, or access to someone who has it, to check the AI's work.
Perhaps the biggest concern about using AI in education is that it removes the element of "productive struggle," a crucial part of how people learn and master new material. Belsky says OpenAI is developing technology to counter that. This week, OpenAI introduced "Study Mode" in ChatGPT, which provides students with "guiding questions that calibrate responses to their objective and skill level to help them build deeper understanding," according to OpenAI's website.

OpenAI is not the only technology company thinking about this. Kira Learning is a startup chaired by Google Brain founder Andrew Ng. It launched in 2021 to help teachers without a background in computer science teach the subject effectively, and earlier this year it released a slate of AI agents. The aim is to introduce "friction" into students' conversations with AI at the right stages so that they actually have a productive struggle and learn through the experience, Andre Pasinetti, cofounder and CEO of Kira, told Business Insider.

For the near future, at least, the onus will likely be on tech companies to spearhead new ways to keep the learning in learning, as universities and educational institutions scramble to keep up. Tyler Cowen, a professor of economics at George Mason University, also discussed the state of the university in a conversation with podcaster Azeem Azhar this week. "There's a lot of hand-wringing about 'How do we stop people from cheating' and not looking at 'What should we be teaching and testing?'" he said. "The whole system is set up to incentivize getting good grades. And that's exactly the skill that will be obsolete."