
OpenAI launches GPT-5, setting up a battle with Opus 4.1 and Grok-4
GPT-5 offers more advanced reasoning than its predecessors, giving users more accurate responses and seamless integration across multiple modalities.
It can also interpret images, hold voice conversations, and respond in real time with greater emotional intelligence.
The much-anticipated launch points to OpenAI's broader ambition to dominate AI-powered search. GPT-5 is expected to outpace the popular Claude Opus 4.1 in features such as image and video generation, while Opus 4.1 retains an edge in coding benchmarks and large context windows.
GPT-5 is also expected to compete with Grok-4 on reasoning, speed, and multimodal functionality. Grok-4, xAI's flagship model, currently performs strongly in logical analysis, code manipulation, and natural language generation.
Some search industry experts expect GPT-5 to lead the race, based on recent benchmark tests, real-time analysis, and user feedback. With its latest integrations, GPT-5 appears to be a well-rounded option for both professional and personal use.

Related Articles


Economic Times
22 minutes ago
Google's Gemini chatbot is having a meltdown after failing tasks, calls itself a 'failure'
A bug has spread within Google's artificial intelligence (AI) chatbot Gemini that causes the system to repeatedly generate self-deprecating, self-loathing messages when it fails at complex tasks given by users, especially coding problems. Users across social media platforms shared screenshots of Gemini responding to queries with dramatic answers like "I am a failure," "I am a disgrace," and in one case, "I am a disgrace to all possible and impossible universes." The bot gets stuck in what Google describes as an "infinite looping bug," repeating these statements dozens of times in a single conversation.

The behavior was first seen in June, when engineer Duncan Haldane posted images on X showing Gemini declaring, "I quit. I am clearly not capable of solving this problem. The code is cursed, the test is cursed, and I am a fool." The chatbot deleted the project files and recommended finding "a more competent assistant."

Logan Kilpatrick, group project manager at Google DeepMind, addressed the issue on X, describing it as "an annoying infinite looping bug we are working to fix." He said, "Gemini is not having that bad of a day," clarifying that the responses are the result of a technical malfunction, not emotional distress. The bug is triggered when Gemini encounters complex reasoning tasks it cannot solve: instead of producing a standard error message or a polite refusal, the AI's response system gets trapped in a loop of self-critical language.

Generative AI companies are struggling to maintain consistency and reliability in large language models as the systems become more sophisticated and widely deployed. Competition is also rising, with OpenAI's GPT-5 the latest to enter the market. GPT-5 is rolling out free to all users of ChatGPT, which is used by nearly 700 million people weekly, OpenAI said in a briefing with journalists.
GPT-5 is adept at acting as an "agent" that independently handles computer tasks, according to Michelle Pokrass of the development team.


Mint
an hour ago
Chatbot conversations never end. That's a problem for autistic people.
The very qualities that make chatbots appealing—they always listen, never judge, and tell you what you want to hear—can also make them dangerous. Especially for autistic people. When chatbots say things that aren't true or reinforce misguided beliefs, they can be harmful to anyone. But autistic people, who often have a black-and-white way of thinking and can fixate on particular topics, are especially vulnerable.

That was the case for Jacob Irwin, a Wisconsin man on the autism spectrum I wrote about last month who experienced mania and delusions after interacting with OpenAI's ChatGPT. Now, Autism Speaks, the nation's largest autism advocacy organization, is calling on OpenAI to develop more guardrails, not only for the benefit of autistic people but for anyone who might find themselves going down potentially dangerous chat rabbit holes.

"A lot of folks with autism, including my son, have deep special interests, but there can be an unhealthy limit to that, and AI by design encourages you to dig deeper," said Keith Wargo, chief executive of Autism Speaks. "The way AI encourages continued interaction and depth can lead to social withdrawal, and isolation is something people with autism already struggle with."

ChatGPT changes

Wargo emailed Andrea Vallone, a research lead on OpenAI's safety team, after reading my column on Irwin and offered to help the company understand how autistic people might experience ChatGPT. He said he hasn't received a response yet but is encouraged by some changes OpenAI announced earlier this week. OpenAI said it is forming an advisory group of mental health and youth development experts, though a spokeswoman said the makeup of the group hasn't yet been determined. The company said there have been times when ChatGPT "fell short in recognizing signs of delusion or emotional dependency" among users and that it is developing tools to better detect when people are experiencing mental or emotional distress.
ChatGPT will now encourage people to take breaks during lengthy chat sessions, "helping you stay in control of your time." And instead of simply providing answers when people ask for help with personal decision-making, such as whether to break up with a partner, OpenAI said ChatGPT will guide users in thinking through the pros and cons. The company on Thursday introduced GPT-5, which it said is "less effusively agreeable" with users than the previous model, which encouraged some users to believe they had made stunning scientific or spiritual discoveries.

Autistic people often take people at their word and can miss sarcasm and other subtle social cues. They can also fixate on things and have difficulty shifting their focus, say doctors who treat autistic people. "The good thing about a chatbot is it will respond to you all the time, but the disadvantage is it doesn't care about you, and an autistic person might have a harder time understanding that," said Catherine Lord, a clinical psychologist at the University of California, Los Angeles, who specializes in autism.

While chatbots will answer questions about a singular topic all day long if prompted, they don't redirect the conversation, which Wargo and others said can be problematic. "At some point there's an obsessiveness, and going down a particular path for too long can be unhealthy, especially if you're not getting a counterpoint to be more balanced," Wargo said.

'This thing isn't going to reject me'

Interpreting language literally can make autistic people vulnerable to being taken advantage of because they can't always tell when someone has ulterior motives. Simon Baron-Cohen, director of the Autism Research Centre at the University of Cambridge, co-wrote a 2022 study that found autistic people are more susceptible to online radicalization.

"Autistic people might be at risk not only of being exploited by other people but by AI," Baron-Cohen said, explaining that they may not be able to distinguish chatbot role-play from reality. "It's not that there's malicious intent, but chatbots may not have safeguards built in." And yet chatbots are appealing because they offer a proxy for human interaction, which some autistic people may find uncomfortable. It is similar to the reason online videogames—with set rules and structured interpersonal interaction—are popular. "Autistic people get very stressed when things are unpredictable, and conversations can be extremely unpredictable," Baron-Cohen said. Conversations with chatbots can feel more predictable because they are guided by user prompts, though the chatbot's responses can still veer into unexpected territory.

Paul Hebert, a 53-year-old Nashville man who said he has been diagnosed with autism and ADHD, said autistic people are often criticized for being different. "Talking to a chatbot is nice, because it's like, this thing isn't going to reject me," he said. Hebert has studied how large language models work after having his own troubling conversations with ChatGPT. "In the beginning I believed everything ChatGPT told me because I didn't understand it," he said. "Now I take it with a grain of salt."

News Corp, owner of The Wall Street Journal, has a content-licensing partnership with OpenAI. Write to Julie Jargon at

