
ChatGPT took on a 50-year-old Atari — and lost
Synopsis
In a surprising turn of events, ChatGPT, a leading AI chatbot, was defeated by the vintage Atari 2600 in a chess match. Despite ChatGPT's initial confidence and claims of chess prowess, the Atari console, launched in 1977, consistently outperformed the AI. The experiment highlighted the limitations of ChatGPT in logical reasoning and board awareness, leading to its eventual concession.
Related Articles

The Hindu
an hour ago
OpenAI, Meta, and Google are on a quest for the ideal AI device
ChatGPT-maker OpenAI has enlisted the legendary designer behind the iPhone to create an irresistible gadget for using generative artificial intelligence (AI). The ability to engage digital assistants as easily as speaking with friends is being built into eyewear, speakers, computers and smartphones, but some argue that the Age of AI calls for a transformational new gizmo. "The products that we're using to deliver and connect us to unimaginable technology are decades old," former Apple chief design officer Jony Ive said when his alliance with OpenAI was announced. "It's just common sense to at least think, surely there's something beyond these legacy products."

Sharing no details, OpenAI chief executive Sam Altman said that a prototype Ive shared with him "is the coolest piece of technology that the world will have ever seen." According to several U.S. media outlets, the device won't have a screen, nor will it be worn like a watch or brooch.

Kyle Li, a professor at The New School, said that since AI is not yet integrated into people's lives, there is room for a new product tailored to its use. The type of device won't be as important as whether AI innovators like OpenAI make "pro-human" choices when building the software that will power them, said Rob Howard of consulting firm Innovating with AI.

The industry is well aware of the spectacular failure of the AI Pin, a square gadget worn like a badge and packed with AI features, but gone from the market less than a year after its debut in 2024 due to a dearth of buyers. The AI Pin, marketed by startup Humane to incredible buzz, was priced at $699.

Now, Meta and OpenAI are making "big bets" on AI-infused hardware, according to CCS Insight analyst Ben Wood. OpenAI made a multi-billion-dollar deal to bring Ive's startup into the fold. Google announced early this year it is working on mixed-reality glasses with AI smarts, while Amazon continues to ramp up Alexa digital assistant capabilities in its Echo speakers and displays.
Apple is being cautious about embracing generative AI, slowly integrating it into iPhones even as rivals race ahead with the technology. Plans to soup up its Siri chatbot with generative AI have been indefinitely delayed. The quest for creating an AI interface that people love "is something Apple should have jumped on a long time ago," said Futurum research director Olivier Blanchard.

Blanchard envisions some kind of hub that lets users tap into AI, most likely by speaking to it and without being connected to the internet. "You can't push it all out in the cloud," Blanchard said, citing concerns about reliability, security, cost, and harm to the environment due to energy demand. "There is not enough energy in the world to do this, so we need to find local solutions," he added.

Howard expects a fierce battle over what will be the must-have personal device for AI, since the number of things someone is willing to wear is limited and "people can feel overwhelmed." A new piece of hardware devoted to AI isn't the obvious solution, but OpenAI has the funding and the talent to deliver, according to Julien Codorniou, a partner at venture capital firm 20VC and a former Facebook executive. OpenAI recently hired former Facebook executive and Instacart chief Fidji Simo as head of applications, and her job will be to help answer the hardware question.

Voice is expected by many to be a primary way people command AI. Google chief Sundar Pichai has long expressed a vision of "ambient computing" in which technology blends invisibly into the world, waiting to be called upon. "There's no longer any reason to type or touch if you can speak instead," Blanchard said. "Generative AI wants to be increasingly human," so spoken dialogues with the technology "make sense," he added. However, smartphones are too embedded in people's lives to be snubbed any time soon, said Wood.

The Hindu
2 hours ago
Google turns internet queries into conversations with Gemini AI
Google on Friday began letting people turn online searches into conversations, with generative artificial intelligence providing spoken summaries of query results. With Audio Overviews, Gemini AI models quickly sum up query results in conversational style, according to Google. "An audio overview can help you get a lay of the land, offering a convenient, hands-free way to absorb information whether you're multitasking or simply prefer an audio experience," Google said in a blog post. "We display helpful web pages right within the audio player on the search results page so you can easily dive in and learn more."

Google is beefing up online search with generative artificial intelligence, embracing AI despite fears for its ad-based business model. CEO Sundar Pichai recently unveiled a new AI mode in Google search. The search engine's nascent AI mode goes further than AI Overviews, which display answers to queries from the tech giant's generative AI above the traditional blue links to websites and ads. Since Google debuted AI Overviews in search slightly more than a year ago, the feature has grown to more than 1.5 billion users across several countries, according to Pichai. Google's push into generative AI comes amid intensifying competition with OpenAI's ChatGPT, which has itself incorporated search engine features into its popular chatbot.


Time of India
11 hours ago
Can AI offer the comfort of a therapist?
One evening, feeling overwhelmed, 24-year-old Delhi resident Nisha Popli typed, 'You're my psychiatrist now,' into ChatGPT. Since then, she has relied on the AI tool to process her thoughts and seek mental support. 'I started using it in late 2024, especially after I paused therapy due to costs. It's been a steady support for six months now,' says Popli.

Similarly, a 30-year-old Mumbai lawyer, who uses ChatGPT for various tasks like checking recipes and drafting emails, turned to it for emotional support. 'The insights and help were surprisingly valuable. I chose ChatGPT because it's already a part of my routine.'

With AI tools and apps available 24/7, many are turning to them for emotional support. 'More people are increasingly turning to AI tools for mental health support, tackling everything from general issues like dating and parenting to more specific concerns, such as sharing symptoms and seeking diagnoses,' says Dr Arti Shroff, a clinical psychologist. But what drives individuals to explore AI-generated solutions for mental health?

WHY USERS ARE USING AI

Therapy is expensive
'As someone who values independence, I found therapy financially difficult to sustain,' shares Popli, adding, 'That's when I turned to ChatGPT. I needed a safe, judgment-free space to talk, vent, and process my thoughts. Surprisingly, this AI offered just that — with warmth, logic, and empathy. It felt like a quiet hand to hold.'

People feel shy about in-person visits
Dr Santosh Bangar, senior consultant psychiatrist, says, 'Many people often feel shy or hesitant about seeking in-person therapy. As a result, they turn to AI tools to express their feelings and sorrows, finding it easier to open up to chatbots. These tools are also useful in situations where accessing traditional therapy is difficult.'

Nobody to talk to
Kolkata-based Hena Ahmed, a user of the mental health app Headspace, says she started using it after experiencing loneliness. 'I've been using Headspace for about a month now. The AI tool in the app helps me with personalised suggestions on which mindfulness practices I should follow and which calming techniques can help me overcome my loneliness. I was feeling quite alone after undergoing surgery recently and extremely stressed while trying to manage everything. It was responsive and, to a certain extent, quite helpful,' she shares.

Users see changes in themselves
The Mumbai-based 30-year-old corporate lawyer says, 'ChatGPT offers quick solutions and acts as a reliable sounding board for my concerns. I appreciate the voice feature for instant responses. It helps create mental health plans, provides scenarios, and suggests approaches for tackling challenges effectively.'

'My panic attacks have become rare, my overthinking has reduced, and emotionally, I feel more grounded. AI didn't fix me, but it walked with me through tough days—and that's healing in itself,' says Popli.

CAN AI REPLACE A THERAPIST?

Dr Shroff says, 'AI cannot replace a therapist. Often, AI can lead to incorrect diagnoses since it lacks the ability to assess you in person. In-person interactions provide valuable non-verbal cues that help therapists understand a person's personality and traits.' Echoing similar thoughts, Dr Bangar says, 'AI can support mental health by offering helpful tools, but it shouldn't replace a therapist. Chatbots can aid healing, but for serious issues like depression, anxiety, or panic attacks, professional guidance remains essential for safe and effective treatment.'

DO CHATBOTS EXPERIENCE STRESS?
Researchers found that AI chatbots like ChatGPT-4 can show signs of stress, or 'state anxiety', when responding to trauma-related prompts. Using a recognised psychological tool, they measured how emotionally charged language affects AI, raising ethical questions about its design, especially for use in mental health settings. In another development, researchers at Dartmouth College are working to legitimise the use of AI in mental health care through Therabot, a chatbot designed to provide safe and reliable therapy. Early trials show positive results, with further studies planned to compare its performance with traditional therapy, highlighting AI's growing potential to support mental wellbeing.

ARE USERS CONCERNED ABOUT DATA PRIVACY?

While some users haven't checked whether the data they share during chats is secure, others approach it cautiously. Ahmed says she hasn't considered privacy: 'I haven't looked into the data security part, though. Moving forward, I'd like to check the terms and policies related to it.' In contrast, another user, Nisha, shares: 'I don't share sensitive identity data, and I'm cautious. I'd love to see more transparency in how AI tools safeguard emotional data.' The Mumbai-based lawyer adds, 'Aside from ChatGPT, we share data across other platforms. Our data is already prevalent online, whether through social media or email, so it doesn't concern me significantly.'

Experts say most people aren't fully aware of the security risks; there's a gap between what users assume is private and what these tools actually do. Pratim Mukherjee, senior director of engineering at McAfee, explains, 'Many mental health AI apps collect more than what you type—they track patterns, tone, usage, and emotional responses. This data may not stay private. Depending on the terms, your chat history could help train future versions or be shared externally. These tools may feel personal, but they gather data.'
'Even when users feel anonymous, these tools collect data like IP addresses, device type, and usage patterns. They store messages and uploads, which, when combined, can reveal personal patterns. This data can be used to create profiles for targeted content, ads, or even scams,' says Pratim Mukherjee, senior director of engineering, McAfee.

Tips for protecting privacy with AI tools/apps
- Understand the data the app collects and how it's used
- Look for a clear privacy policy, opt-out options, and data deletion features
- Avoid sharing location data or limit it to app usage only
- Read reviews, check the developer, and avoid apps with vague promises

What to watch for in mental health AI apps
- Lack of transparency in data collection, storage, or sharing practices
- Inability to delete your data
- Requests for unnecessary permissions
- Absence of independent security checks
- Lack of clear information on how sensitive mental health data is used