
OpenAI ChatGPT-5 live blog: Countdown and all the big news as it happens
Now, lining up perfectly with estimated timelines, OpenAI has announced a livestream set to go live today at 10am PT, 1pm ET, 6pm BST.
In typical OpenAI fashion, the company has stayed quiet about what exactly this is, but Sam Altman has claimed that this live stream "will be longer than usual," suggesting we've got a lot of exciting new features to cram into the reveal.
"LIVE5TREAM THURSDAY 10AM PT" (August 6, 2025)
If this is GPT-5, early testers and insiders say it could be a major leap forward — with faster responses, fewer hallucinations, and even the ability to build apps or software from scratch.
We'll be updating this page throughout the day with everything you need to know:
Whether you're a casual ChatGPT user or a hardcore AI enthusiast, stay tuned — this could be one of the most important AI updates of the year.

Related Articles


Tom's Guide
2 hours ago
ChatGPT-5 just got 4 new personalities — here's how to use them (and why you should)
With the launch of OpenAI's newest model, the company has introduced four distinct personality modes for ChatGPT-5. As the company's most advanced large language model to date, it delivers major upgrades in reasoning, memory and multimodal capabilities. It's better at sustaining complex conversations, understanding context across chats and producing more accurate and creative responses. Now, during text-based conversations, users can customize how the chatbot sounds and responds. It's similar to custom GPTs, but these modes were created by OpenAI to give users a new way to tailor the assistant's tone and demeanor with a single click.

Users can turn the chatbot into a dry-witted Cynic, an efficient Robot, a warm Listener or an enthusiastic Nerd. These personality styles are available to users running GPT-5 in text mode and can be changed at any time during a chat. Each one is distinct yet helpful. Here's how each mode behaves:

Cynic: Dry, sarcastic and unfiltered. Offers practical advice with a side of side-eye. Great for humor or cutting through fluff.

Robot: Emotionless, direct and to the point. No extra words, just clean, accurate responses.

Listener: Calm, warm and empathetic. Reflects your thoughts gently and adds emotional nuance to its replies.

Nerd: Curious, excitable and deeply knowledgeable. Explains things thoroughly and loves sharing fun facts.

Each personality mode brings a different tone to your conversations and can even shift how you brainstorm, write or learn with AI. Activating these modes is easy: the personality will apply to your current chat, but you can switch or turn off the personality mode at any time. Here's how each personality mode answers the same prompt, "Why is the sky blue?"

Cynic: Oh, you mean besides the fact that Mother Nature likes to mess with us? Fine. Sunlight hits the atmosphere, the shorter blue wavelengths scatter more than the other colors, so the sky looks blue to us simple humans. You'll get your nice orange or red at sunset when the light's taking the scenic route. Happy now?

Robot: The sky appears blue due to Rayleigh scattering. When sunlight enters Earth's atmosphere, shorter wavelengths (blue) scatter more efficiently than longer wavelengths (red). This selective scattering causes the human eye to perceive the sky as blue during daylight hours.

Listener: That's such a great question! It's actually because sunlight is made up of many colors, and when it passes through our atmosphere, the tiny air particles scatter the shorter blue wavelengths more than the others. So when we look up, our eyes catch mostly that soft, blue light. It's kind of peaceful when you think about it, right?

Nerd: Oh, you've unlocked one of my favorite topics! The blue sky is thanks to Rayleigh scattering, which is inversely proportional to the fourth power of wavelength. That means shorter wavelengths like blue and violet scatter much more than red or yellow. Our eyes are more sensitive to blue than violet, and some violet is absorbed by the upper atmosphere, so we see a rich blue. Fun fact: the same physics makes Mars' sunsets blue and ours red!

Switching between ChatGPT's personalities is fun, but it can also seriously boost your productivity and creativity. Each mode offers a different lens for thinking and communicating. Use Cynic when you want to cut through the noise; it's good for brainstorming hot takes or injecting humor into dry topics. When you want to keep things efficient, such as in technical writing or troubleshooting, try Robot; it's also a useful companion when coding. Listener adds empathy, which can be helpful for personal writing or mental check-ins; it could also be useful for writing to customers if you run a business. Nerd is a useful personality when you want to make learning fun; it makes complex topics more engaging, which is especially handy for kids.

Whether you're writing an email, stuck on a project or just want to hear something explained with personality, these modes can shift the vibe and help you unlock new creative angles, all without switching tools. These new personality styles give ChatGPT-5 a more human-like edge and give you more control. As the examples above show, each one responds differently. This is an opportunity to choose how your AI sounds, thinks and helps, instead of the one-size-fits-all assistant that we got with GPT-4. Try them all. You might be surprised which one becomes your favorite.
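The Nerd's answer leans on a real physical law: Rayleigh scattering intensity varies inversely with the fourth power of wavelength. A quick back-of-the-envelope sketch (using representative wavelengths of roughly 450 nm for blue and 700 nm for red; the helper function here is ours, not from the article) shows why blue dominates the daytime sky:

```python
# The Nerd's physics, checked: Rayleigh scattering intensity is inversely
# proportional to the fourth power of wavelength, so shorter (blue)
# wavelengths scatter far more strongly than longer (red) ones.

def rayleigh_ratio(lambda_red_nm: float, lambda_blue_nm: float) -> float:
    """How many times more strongly blue light scatters than red light."""
    return (lambda_red_nm / lambda_blue_nm) ** 4

# Representative wavelengths: blue ~450 nm, red ~700 nm.
ratio = rayleigh_ratio(700, 450)
print(f"Blue light scatters about {ratio:.1f}x more than red light")
# prints: Blue light scatters about 5.9x more than red light
```

That roughly sixfold difference is why scattered skylight looks blue, while direct sunset light, with much of its blue component scattered away en route, looks red.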


Forbes
2 hours ago
ChatGPT As Your Bedside Companion: Can It Deliver Compassion, Commitment, And Care?
During the GPT-5 launch this week, Sam Altman, CEO of OpenAI, invited a cancer patient and her husband to the stage. She shared how, after receiving her biopsy report, she turned to ChatGPT for help. The AI instantly decoded the dense medical terminology, interpreted the findings, and outlined possible next steps. That moment of clarity gave her a renewed sense of control over her care. Altman noted that health is one of the top reasons consumers use ChatGPT, saying it 'empowers you to be more in control of your healthcare journey.' Around the world, patients are turning to AI chatbots like ChatGPT and Claude to better understand their diagnoses and take a more active role in managing their health. In hospitals, both patients and clinicians sometimes use these AI tools informally to verify information. At medical conferences, some healthcare professionals admit to carrying a 'second phone' dedicated solely to AI queries. Without accessing any private patient data, they use it to validate their assessments, much like patients seeking a digital 'second opinion' alongside their physician's advice. Even during leisure activities like hiking or camping, parents often rely on AI chatbots like ChatGPT or Claude for quick guidance on everyday concerns such as treating insect bites or skin reactions in their children. This raises an important question: Can AI Companions Like ChatGPT, Claude, and Others Offer the Same Promise, Comfort, Commitment, and Care as Some Humans? As AI tools become more integrated into patient management, their potential to provide emotional support alongside clinical care is rapidly evolving. These chatbots can be especially helpful in alleviating anxiety caused by uncertainty, whether it's about a diagnosis, prognosis, or simply reassurance regarding potential next steps in medical or personal decisions.
Given the ongoing stressors that disease management places on patients, advanced AI companions like ChatGPT and Claude can play an important role by providing timely, 24/7 reassurance, clear guidance, and emotional support. Notably, some studies suggest that AI responses can be perceived as even more compassionate and reassuring than those from humans. Loneliness is another pervasive issue in healthcare. Emerging research suggests that social chatbots can reduce loneliness and social anxiety, underscoring their potential as complementary tools in mental health care. These advanced AI models help bridge gaps in information access, emotional reassurance, and patient engagement, offering clear answers, confidence, comfort, and a digital second opinion, particularly valuable when human resources are limited. Mustafa Suleyman, CEO of Microsoft AI, has articulated a vision for AI companions that evolve over time and transform our lives by providing calm and comfort. He describes an AI 'companion that sees what you see online and hears what you hear, personalized to you. Imagine the overload you carry quietly, subtly diminishing. Imagine clarity. Imagine calm.' While there are many reasons AI is increasingly used in healthcare, a key question remains: Why Are Healthcare Stakeholders Increasingly Turning to AI? Healthcare providers are increasingly adopting AI companions because they fill critical gaps in care delivery. Their constant availability and scalability enhance patient experience and outcomes by offering emotional support, cognitive clarity, and trusted advice whenever patients need it most. While AI companions are not new, today's technology delivers measurable benefits in patient care. For example, Woebot, an AI mental health chatbot, demonstrated reductions in anxiety and depression symptoms within just two weeks.
OpenAI's current investment in HealthBench to promote health and well-being further demonstrates its promise, commitment, and potential to help even more patients. These advances illustrate how AI tools can effectively complement traditional healthcare by improving patient well-being through consistent reassurance and engagement. So, what's holding back wider reliance on chatbots? The Hindrance: Why We Can't Fully Rely on AI Chatbot Companions Despite rapid advancements, AI companions are far from flawless, especially in healthcare where the margin for error is razor thin. Large language models (LLMs) like ChatGPT and Claude are trained on vast datasets that may harbor hidden biases, potentially misleading vulnerable patient populations. Even with impressive capabilities, ChatGPT can still hallucinate or provide factually incorrect information—posing real risks if patients substitute AI guidance for professional medical advice. While future versions may improve reliability, current models are not suited for unsupervised clinical use. Sometimes, AI-generated recommendations may conflict with physicians' advice, which can undermine trust and disrupt the patient–clinician relationship. There is also a risk of patients forming deep emotional bonds with AI, leading to over-dependence and blurred boundaries between digital and human interaction. As LinkedIn cofounder Reid Hoffman put it in Business Insider, 'I don't think any AI tool today is capable of being a friend. And I think if it's pretending to be a friend, you're actually harming the person in so doing.' For now, AI companions should be regarded as valuable complements to human expertise, empathy, and accountability — not replacements. A Balanced, Safe Framework: Maximizing Benefit, Minimizing Risk To harness AI companions' full potential while minimizing risks, a robust framework is essential.
This begins with data transparency and governance: models must be trained on inclusive, high-quality datasets designed to reduce demographic bias and errors. Clinical alignment is critical; AI systems should be trained on evidence-based protocols and guidelines, with a clear distinction between educational information and personalized medical advice. Reliability and ethical safeguards are vital, including break prompts during extended interactions, guidance directing users to seek human support when needed, and transparent communication about AI's limitations. Above all, AI should complement human clinicians, acting as a navigator or translator to encourage and facilitate open dialogue between patients and their healthcare providers. Executive Call to Action In today's digital age, patients inevitably turn to the internet and increasingly to AI chatbots like ChatGPT and Claude for answers and reassurance. Attempts to restrict this behavior are neither practical nor beneficial. Executive physician advisors and healthcare leaders are therefore responsible for embracing this reality by providing structured, transparent, and integrated pathways that guide patients in using these powerful tools wisely. It is critical that healthcare systems are equipped with frameworks ensuring AI complements clinical care rather than confuses or replaces it. Where AI capabilities fall short, these gaps must be bridged with human expertise and ethical oversight. Innovation should never come at the expense of patient safety, trust, or quality of care. By proactively shaping AI deployment in healthcare, stakeholders can empower patients with reliable information, foster meaningful clinician-patient dialogue, and ultimately improve outcomes in this new era of AI-driven medicine.


Forbes
4 hours ago
Here Are GPT-5 Prompt Engineering Insights Including Crucial AI Prompting Tips And Techniques
In today's column, I provide GPT-5 prompt engineering tips and techniques that will aid in getting the best outcomes when using this newly released generative AI. I'm sure that just about everyone by now knows that OpenAI finally released GPT-5, doing so after a prolonged period of immense and wildly fantastical speculation about what it would be like. Well, now we know what it is (see my in-depth review of GPT-5 at the link here). Bottom line is that GPT-5 is pretty much akin to all the other generative AI and large language models (LLMs) when it comes to doing prompting. The key is that if you want to ensure that GPT-5 works suitably for your needs, you must closely understand how GPT-5 differs from prior OpenAI AI products. GPT-5 has distinctive features and functionality that bring forth new considerations about composing your prompts. Let's talk about it. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). Readers might recall that I previously posted an in-depth depiction of over eighty prompt engineering techniques and methods (see the link here). Top-notch prompt engineers realize that learning a wide array of researched and proven prompting techniques is the best way to get the most out of generative AI. Prompting Is Still Tried And True The first place to begin when assessing GPT-5 from a prompt engineering perspective is that prompts are still prompts. Boom, drop the mic. I say that somewhat irreverently. Here's the deal. There was prior conjecture that perhaps GPT-5 would turn the world upside down when it came to using prompts. The floated ideas of how GPT-5 might conceivably function were astounding and nearly out of this world ('it will read your mind', 'it will know what you want before you even know', etc.). The truth is now known. 
GPT-5 is essentially a step up from ChatGPT and GPT-4, but otherwise you do prompting just like you've done all along. There isn't a new kind of magical way to write prompts. You are still wise to compose prompts as you've been doing since the early days of contemporary generative AI. To clarify, I am emphasizing that you should astutely continue to write clearly worded prompts. Be direct. Don't be tricky. Write prompts that are long enough to articulate your question or task at hand. Be succinct if possible. Definitely don't be overly profuse or attempt to be complicated in whatever your request is. And so on. Those are all golden rules and remain perfectly intact when using GPT-5. I am confident that all the prompt engineering specialized techniques that I've previously covered will generally work appropriately with GPT-5. Some might require a tweak or minor refinement, but otherwise, they are prudent and ready to go (see my list at the link here). Auto-Switching Can Be A Headache We can next consider how to artfully try and accommodate GPT-5 via composing prompts that GPT-5 will efficiently and effectively act on. The biggest aspect that entails both good news and bad news about GPT-5 is that OpenAI decided to include an auto-switcher. This is a doozy. It will require you to potentially rethink some of your prompting since it is quite possible that GPT-5 isn't going to make the right routing decisions on your behalf. Allow me a moment to explain the quandary. It used to be that you would have to choose which of the various OpenAI AI products you wanted to use for a particular situation at hand. There had been an organic expansion of OpenAI's prior models in the sense that there have been GPT-4o, GPT-4o-mini, OpenAI o3, OpenAI o4-mini, GPT-4.1-nano, and so on. When you wanted to use OpenAI's AI capabilities, you had to select which of those available models you wanted to utilize. It all depended on what you were looking to do. Some were faster, some were slower.
Some were deeper at certain classes of problems, others were shallower. It was a smorgasbord that required you to pick the right one as suitable for your task at hand. The onus was on you to know which of the models were particularly applicable to whatever you were trying to do. It could be a veritable hit-and-miss process of selection and tryouts. GPT-5 now has uplifted those prior versions into new GPT-5 submodels, and the overarching GPT-5 model makes the choice of which GPT-5 submodel might be best for whatever problem or question you happen to ask. The good news is that depending on how your prompts are worded, there is a solid chance that GPT-5 will select one of the GPT-5 submodels that will do a bang-up job of answering your prompt. The bad news is that the GPT-5 auto-switcher might choose a less appropriate GPT-5 submodel. Oops, your answer will not be as sound as if the more appropriate submodel had been chosen. Worse still, each time that you enter a prompt or start a new conversation, the GPT-5 auto-switcher might switch you to some other GPT-5 submodel, back and forth, doing so in a wanton fashion. It can make your head spin since the answers potentially will vary dramatically. Craziness In Design The average user probably won't realize that all these switcheroo mechanics are happening behind the scenes. I say that because GPT-5 doesn't overtly tell you that it is taking these actions. It just silently does so. I appreciate that the designers apparently assumed that no one would care or want to know what is going on under the hood. The problem is that those who are versed in using AI and are up-to-speed on prompting are being bamboozled by this hidden and secreted behavior. A savvy user can almost immediately sense that something is amiss. Frustratingly, GPT-5 won't let you directly control the auto-switching. You cannot tell the AI to use a particular submodel. 
You cannot get a straight answer if you ask GPT-5 which submodel it intends to use on your prompt. It is perhaps like trying to get the key to Fort Knox. GPT-5 refuses to play ball. The marketplace has tweeted vociferously that something needs to be done about this lack of candor by GPT-5 regarding the model routing that is occurring. Sam Altman sent out a tweet on X that suggested they are going to be making some changes on this aspect (see his X posting of August 8, 2025). The thing is, we can applaud the desire to have a seamless, unified experience, but it is similar to having an automatic transmission on a car. Some users are fine with an automatic transmission, but other, more seasoned drivers want to know what gear the car is in and be able to select a gear that they think is most suitable for their needs. Prompting GPT-5 For Routing As the bearer of bad news, I should also add that the auto-switching comes with another said-to-be handy internal mechanism that decides how much processing time will be undertaken for your entered prompt. Again, you have no particular say in this. It could be that the prompt gets tons of useful processing time, or maybe the time is shortchanged. You can't especially control this, and the settings are not within your grasp (as an aside, to some degree, if you are a developer and are using the API, you have more leeway in dealing with this; see the OpenAI GPT-5 System Card for the technical details). Let me show you what I've been doing about this exasperating situation. First, here is a mapping of the prior models to the GPT-5 submodels: The GPT-5 submodels are considered successors and depart from the earlier models in various ways. That being said, they still are roughly on par as to the relative strengths and weaknesses that previously prevailed. I will show you what I've come up with to try and sway the GPT-5 auto-switcher. Prompting With Aplomb Suppose I have a prompt that I believe would have worked best on GPT-4o. 
But I am using GPT-5, thus I am not using GPT-4o, plus OpenAI has indicated that it will sunset the prior models, so you might as well get used to using GPT-5. Darned if you cannot simply tell GPT-5 to use gpt-5-main (i.e., realizing that gpt-5-main is now somewhat comparable to GPT-4o, per my chart above). The AI will either tell you it doesn't function that way or might even imply that it will do as you ask, yet it might do something else. Bow to the power of the grand auto-switcher. This eerily reminds me of The Matrix. Anyway, we need to somehow convince GPT-5 to do what we want, but we must do so with aplomb. Asking straightaway isn't a viable option. The need to sway the AI is our best option at this ugly juncture. In the specific case of my wanting to use gpt-5-main, here is a prompt that I use and seems to do the trick (much of the time): It appears that by emphasizing the nature of what I want GPT-5 to do, it seems possible to sway the direction that the auto-switcher will route my next prompt. Not only will I possibly get the submodel that I think is the best choice for the prompt, observe that I also made a big deal about the depth of reasoning that ought to take place. This potentially helps to kick the AI into giving an allotment of processing time that it, by enigmatic means, would have perhaps inadvertently shortcut (OpenAI refers to processing time as so-called 'thinking time' – an anthropomorphizing of AI that I find to be desperate and despairing). I am not saying this sway-related prompting is a guaranteed result. After trying a bunch of times, it seemed to be working as hoped for. I came up with similar prompts for each of the other GPT-5 submodels. If there is enough interest expressed by readers, I will do a follow-up with those details. Be on the watch for that upcoming coverage. On a related note, I will also soon be covering the official GPT-5 Prompting Guide that OpenAI has posted, along with their Prompt Optimizer Tool. 
Those are aimed primarily at AI developers and not especially about day-to-day, ordinary prompting in GPT-5. Watch Out That Writing Is Enhanced On the writing side of things, GPT-5 has improvements in a myriad of writing aspects. The ability to generate poems is enhanced. Depth of writing and the AI being able to make more compelling stories and narratives seems to be an added plus. My guess is that the everyday user won't discern much of a difference. For a more seasoned user, you are bound to notice that the writing has gotten an upgrade. I suppose it is something like getting used to a third grader and now being conversational with a sixth grader. Or something like that. I use this prompt to get GPT-5 to be closer to the way it was in the GPT-4 series: That seems to get me the kind of results that I used to see. It is not an ironclad method, but it generally works well. I realize that some people are going to scream loudly that I ought not to suggest that users revert to the GPT-4 writing style. We all should accept and relish the GPT-5 writing style. Are we going backwards by asking for GPT-5 to speak like GPT-4? Maybe. I grasp the angst. It's up to you, and I'm not at all saying that everyone should use this prompting tip. Please use it at your personal discretion. Lies And AI Hallucinations OpenAI claims that GPT-5 is more honest than prior OpenAI models, plus it is less likely to hallucinate (hallucination is yet another misappropriated word used in the AI field to describe when the AI produces fictionalized responses that have no bearing in fact or truth). I suppose it might come as a shock to some people that AI has been and continues to lie to us, see my discussion at the link here. I would assume that many people have heard or even witnessed that AI can make things up, i.e., produce an AI hallucination. 
Worries are that AI hallucinations are so convincing in their appearance of realism, and the AI has an aura of confidence and rightness, that people are misled into believing false statements and, at times, embrace its crazy assertions. See more at the link here. A presumed upbeat consideration is that apparently GPT-5 reduces the lying and reduces the AI hallucinations. The downbeat news is that it isn't zero. In other words, it is still going to lie and still going to hallucinate. This might happen on a less frequent basis, but nonetheless remains a chancy concern. Here is my prompt to help try and further reduce the odds of GPT-5 lying to you: Here is my prompt to help further reduce the odds of GPT-5 incurring a so-called hallucination: My usual caveats apply, namely, these aren't surefire, but they seem to be useful. The crucial motto, as always, still is that if you use generative AI, make sure to remain wary and alert. One other aspect is that you would be shrewd to use both of those prompts so that you can simultaneously try to strike down the lying and the hallucinations. If you only use one of those prompts, the other unresolved side will potentially arise. Try to squelch both. It's your way of steering out of the range of double trouble. Personas Are Coming To The Fore I've repeatedly emphasized in my writing and talks about generative AI that one of the most underutilized and least known pieces of quite useful functionality is the capability of forming personas in the AI (see the link here). You can tell the AI to pretend to be a known person, such as a celebrity or historical figure, and the AI will attempt to do so. For example, you might tell AI to pretend to be Abraham Lincoln. The AI will respond based on having pattern-matched on the writings of Lincoln and the writings about Lincoln. It is instructive and useful for students and learners. 
I even showcased how telling AI to simulate Sigmund Freud can be a useful learning tool for mental health professionals, see the link here. OpenAI has indicated they are selectively making available a set of four new preset personas, consisting of Cynic, Robot, Listener, and Nerd. Each persona behaves in line with its name. The AI shifts into a mode reflecting those types of personalities. The good news is that I hope this spurs people to realize that personas are a built-in functionality and easily activated via a simple prompt. It doesn't take much work to invoke a persona. Here is my overall prompt to get a persona going in GPT-5: Use personas with due caution. I mention this because some people kind of get lost in a conversation where the AI is pretending to be someone. It isn't real. You aren't somehow tapping into the soul of that actual person, dead or alive. Personas are pretenses, so keep a clear head accordingly. Prompt Engineering Still Lives I hope that these important prompting tips and insights will boost your results when using GPT-5. One last comment for now. You might know that some have fervently claimed that prompt engineering is a dying art. No one will need to write prompts anymore. I've discussed in great depth the automated prompting tools that try to do the prompting for you (see my aforementioned list of prompt engineering strategies and tactics). They are good and getting better, but we are still immersed in the handwriting of prompts and will continue down this path for quite a while to come. GPT-5 abundantly reveals that to be the case. A final remark for now. It has been said that Mark Twain made a wry comment that when a newspaper reported him as deceased, he said that the audacious claim was a bit exaggerated. That was smarmily tongue-in-cheek. I would absolutely say the same about prompt engineering. It's here. It isn't disappearing. Keep learning about prompting. You'll be glad that you spent the prized time doing so.
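The column notes in passing that developers using the API get more leeway over model choice and processing time than ChatGPT app users do. As a rough sketch of what that control looks like: the `model`, `reasoning.effort`, and `input` parameter names below follow OpenAI's published Responses API reference at the time of writing, but `build_request` is a hypothetical helper of ours (not an SDK call), and exact names should be verified against the current docs before use.

```python
# A sketch of the developer-side control the column alludes to: via the API,
# you can pin the model and the reasoning ("thinking time") effort yourself,
# rather than letting the consumer app's auto-switcher decide.
# build_request is a hypothetical helper, not part of the openai SDK.

VALID_EFFORTS = ("minimal", "low", "medium", "high")

def build_request(prompt: str, effort: str = "minimal") -> dict:
    """Assemble keyword arguments for client.responses.create(**params)."""
    if effort not in VALID_EFFORTS:
        raise ValueError(f"unknown reasoning effort: {effort!r}")
    return {
        "model": "gpt-5",                 # explicit model: no auto-switcher guessing
        "reasoning": {"effort": effort},  # how much thinking time to allow
        "input": prompt,
    }

params = build_request("Summarize Rayleigh scattering in two sentences.", effort="low")
# With the openai package installed and an API key configured, the real call
# would look like:
#     from openai import OpenAI
#     response = OpenAI().responses.create(**params)
```

Pinning the model and effort in the request sidesteps the routing opacity described above entirely, which is essentially the control the column wishes the chat interface exposed.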