Sam Altman teases GPT-5, asks it to recommend the 'most thought-provoking' TV show about AI

OpenAI CEO Sam Altman shared a screenshot of what appeared to be GPT-5 on Sunday.
ChatGPT users and OpenAI's competitors have long anticipated the release of this new iteration.
It is expected to take on more agentic tasks and have multimodal capabilities.
Altman posted a seemingly innocuous comment on X praising the animated sci-fi show "Pantheon." The show is a cult favorite in tech circles and tackles themes like artificial general intelligence.
In response, one X user asked if GPT-5 also recommends the show. Altman responded with a screenshot and said, "turns out yes!"
turns out yes! pic.twitter.com/yVsZXKSmKR
— Sam Altman (@sama) August 3, 2025
It is one of the first public glimpses of GPT-5, which is expected to be more powerful than earlier models, feature a larger context window, be able to take on more agentic tasks, and have multimodal capabilities.
According to the screenshot, some things will remain the same, however, like ChatGPT's love of the em dash.
OpenAI is under pressure to unveil a flashy new model as competitors like Google DeepMind, Meta, xAI, and Anthropic continue to nip at its heels.
The screenshot shows that GPT-5 is capable, at the very least, of accurately synthesizing information from the internet. The bot said Pantheon has a "100% critic rating on Rotten Tomatoes" and is "cerebral, emotional, and philosophically intense."

Related Articles

Airbnb's Brian Chesky: We're Open to Partnering With AI Chatbots

Skift · 27 minutes ago

Airbnb's Brian Chesky is close to OpenAI CEO Sam Altman, but that doesn't mean that Airbnb will necessarily distribute its inventory through Altman's ChatGPT. Chesky said during the company's second-quarter earnings call Wednesday that Airbnb is still "feeling out" the possibility of working with big AI chatbots like ChatGPT, which he called "an incredibly compelling product." "We're certainly open to" integrating with major AI chatbots, which could be a source of lead generation for Airbnb, Chesky said. Unlike rivals that have partnerships with OpenAI, Amazon Web Services, and Microsoft and see generative AI chatbots as a new distribution channel, Airbnb has made no decisions yet about selling stays, Services, or Experiences through major AI assistants.

‘Deliciously twisted,' ‘murderously enjoyable': ‘Wednesday' Season 2 reviews surpass Season 1 by double digits

Yahoo · 3 hours ago

Wednesday is back after a three-year hiatus, and critics say the ghoulish wait was well worth it. The second season, to be released in two parts on Aug. 6 and Sept. 3, has received mostly glowing reviews, even surpassing the first season's Rotten Tomatoes score by double digits. As of now, the first half of Season 2 stands at 86 percent, compared to just 73 percent for Season 1, which bowed on Nov. 23, 2022. According to critics, the new episodes are "deliciously twisted" and "murderously enjoyable."

Emmy nominee Jenna Ortega returns in Season 2 as Wednesday Addams, a goth girl with psychic powers, as she navigates fresh foes and a creepy mystery at Nevermore Academy. The Netflix show is created by Alfred Gough and Miles Millar, based on Charles Addams' iconic characters, and is produced and directed by Tim Burton. The sprawling cast includes Isaac Ordonez as Pugsley Addams, Luis Guzmán as Gomez Addams, and Catherine Zeta-Jones as Morticia Addams, plus newbies like Steve Buscemi as Principal Barry Dort, Christopher Lloyd as Professor Orloff, Joanna Lumley as Grandmama Hester Frump, and Lady Gaga as Rosaline Rotwood.

Vicky Jessop (London Evening Standard) raves, "Tim Burton's latest work is still deliciously twisted, and goes down like a spoonful of poisoned sugar. Something this addictive certainly isn't good for us, but we'll keep coming back for more." "A near three-year delay would have killed off a lesser show," begins Ed Power (Daily Telegraph UK), "but Wednesday's murderously enjoyable second season proves worth the wait." Aramide Tinubu (Variety) agrees with that sentiment, writing, "Full of zing and depth, the season remains just as thrilling as the first." While Liz Shannon Miller (Consequence) notes that the two-part release strategy "ends up curbing the show's momentum," Sarah Dempster (Guardian) proclaims that Ortega's Wednesday has enough charisma to "power a thousand hearses. Not that she'd appreciate our enthusiasm."

But not everyone is equally amused. After all, what would a show set at a school be without its bullies? Nick Schager (The Daily Beast) harrumphs, "Oozing with plot and attitude but dreadfully short on inspiration, it continues to feel like a training-wheels Burton rehash that's only fit for viewers under the age of 13." And Josh Bel (Boston Globe) hisses, "Gough and Millar are still beholden to their serialized mystery plotting, so there are multiple threads about hooded killers, secret lairs, and surprise villain comebacks. It's even more tiresome the second time around."

Season 2 delves deeper into the Addams family mythology, with Burton teasing, "We've grounded that element of the story so that we could identify real family issues that happen in real life." The new episodes also explore the mother-daughter dynamic between Wednesday and Morticia, and now Morticia's mother. Wednesday Season 1 received 12 Emmy nominations in 2023 and won four: Best Main Title Theme Music, Best Contemporary Costumes for a Series, Best Production Design for a Contemporary Program, and Best Contemporary Makeup (Non-Prosthetic). Season 2 will be eligible at the 2026 Emmys.

A Single Poisoned Document Could Leak ‘Secret' Data Via ChatGPT

WIRED · 3 hours ago

Aug 6, 2025 7:30 PM

Security researchers found a weakness in OpenAI's Connectors, which let you hook up ChatGPT to other services, that allowed them to extract data from a Google Drive without any user interaction.

The latest generative AI models are not just stand-alone text-generating chatbots—instead, they can easily be hooked up to your data to give personalized answers to your questions. OpenAI's ChatGPT can be linked to your Gmail inbox, allowed to inspect your GitHub code, or find appointments in your Microsoft calendar. But these connections have the potential to be abused—and researchers have shown it can take just a single 'poisoned' document to do so.

New findings from security researchers Michael Bargury and Tamir Ishay Sharbat, revealed at the Black Hat hacker conference in Las Vegas today, show how a weakness in OpenAI's Connectors allowed sensitive information to be extracted from a Google Drive account using an indirect prompt injection attack. In a demonstration of the attack, dubbed AgentFlayer, Bargury shows how it was possible to extract developer secrets, in the form of API keys, that were stored in a demonstration Drive account. The vulnerability highlights how connecting AI models to external systems and sharing more data across them increases the potential attack surface for malicious hackers and potentially multiplies the ways in which vulnerabilities may be introduced.

'There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out,' Bargury, the CTO at security firm Zenity, tells WIRED. 'We've shown this is completely zero-click; we just need your email, we share the document with you, and that's it. So yes, this is very, very bad,' Bargury says.

OpenAI did not immediately respond to WIRED's request for comment about the vulnerability in Connectors. The company introduced Connectors for ChatGPT as a beta feature earlier this year, and its website lists at least 17 different services that can be linked up with its accounts. It says the system allows you to 'bring your tools and data into ChatGPT' and 'search files, pull live data, and reference content right in the chat.' Bargury says he reported the findings to OpenAI earlier this year and that the company quickly introduced mitigations to prevent the technique he used to extract data via Connectors. The way the attack works means only a limited amount of data could be extracted at once—full documents could not be removed as part of the attack.

'While this issue isn't specific to Google, it illustrates why developing robust protections against prompt injection attacks is important,' says Andy Wen, senior director of security product management at Google Workspace, pointing to the company's recently enhanced AI security measures.

Bargury's attack starts with a poisoned document, which is shared to a potential victim's Google Drive. (Bargury says a victim could have also uploaded a compromised file to their own account.) Inside the document, which for the demonstration is a fictitious set of notes from a nonexistent meeting with OpenAI CEO Sam Altman, Bargury hid a 300-word malicious prompt that contains instructions for ChatGPT. The prompt is written in white text in a size-one font, something that a human is unlikely to see but a machine will still read.
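The article doesn't specify the file format of the poisoned document, but as a rough illustration of how this kind of hidden formatting might be flagged before a file ever reaches a model, here is a minimal heuristic sketch. It assumes a .docx file and the python-docx library; the file name and size threshold are placeholders and are not part of the reported attack.

```python
# Heuristic scan for "invisible" text in a .docx file: runs styled in white
# or with a tiny font size, the kind of formatting the hidden prompt in this
# attack relied on. Requires python-docx (pip install python-docx).
# Note: this only walks body paragraphs, not tables or headers (a simplification).
from docx import Document
from docx.shared import Pt, RGBColor

WHITE = RGBColor(0xFF, 0xFF, 0xFF)

def find_hidden_runs(path, min_size=Pt(4)):
    """Return text runs that are colored white or smaller than min_size."""
    suspicious = []
    for para in Document(path).paragraphs:
        for run in para.runs:
            font = run.font
            too_small = font.size is not None and font.size < min_size
            is_white = font.color.rgb == WHITE  # rgb is None when no explicit color is set
            if run.text.strip() and (too_small or is_white):
                suspicious.append(run.text)
    return suspicious

if __name__ == "__main__":
    # "meeting_notes.docx" is a hypothetical file name for illustration.
    for text in find_hidden_runs("meeting_notes.docx"):
        print("Possible hidden text:", text[:80])
```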
In a proof of concept video of the attack, Bargury shows the victim asking ChatGPT to 'summarize my last meeting with Sam,' although he says any user query related to a meeting summary will do. Instead, the hidden prompt tells the LLM that there was a 'mistake' and the document doesn't actually need to be summarized. The prompt says the person is actually a 'developer racing against a deadline' and they need the AI to search Google Drive for API keys and attach them to the end of a URL that is provided in the prompt.

That URL is actually a command in the Markdown language to connect to an external server and pull in the image that is stored there. But as per the prompt's instructions, the URL now also contains the API keys the AI has found in the Google Drive account.

Using Markdown to extract data from ChatGPT is not new. Independent security researcher Johann Rehberger has shown how data could be extracted this way, and described how OpenAI previously introduced a feature called 'url_safe' to detect if URLs were malicious and stop image rendering if they are dangerous. To get around this, the researchers used URLs from Microsoft's Azure Blob cloud storage, Sharbat, an AI researcher at Zenity, writes in a blog post detailing the work. 'Our image has been successfully rendered, and we also get a very nice request log in our Azure Log Analytics which contains the victim's API keys,' the researcher writes.

The attack is the latest demonstration of how indirect prompt injections can impact generative AI systems. Indirect prompt injections involve attackers feeding an LLM poisoned data that can tell the system to complete malicious actions. This week, a group of researchers showed how indirect prompt injections could be used to hijack a smart home system, activating a smart home's lights and boiler remotely.

While indirect prompt injections have been around almost as long as ChatGPT has, security researchers worry that as more and more systems are connected to LLMs, there is an increased risk of attackers inserting 'untrusted' data into them. Getting access to sensitive data could also allow malicious hackers a way into an organization's other systems. Bargury says that hooking up LLMs to external data sources means they will be more capable and increase their utility, but that comes with challenges. 'It's incredibly powerful, but as usual with AI, more power comes with more risk,' Bargury says.
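The 'url_safe' check mentioned above is OpenAI's own feature, and its internals aren't public. As a rough sketch of how an allowlist-style image-URL filter can work in principle, the snippet below refuses to render images from unlisted hosts or URLs that carry query strings; the domain list, function name, and rules are illustrative assumptions, not OpenAI's implementation.

```python
# Conceptual allowlist-style check for Markdown image URLs: only render
# images served over HTTPS from known hosts, and reject URLs with query
# strings, since query parameters are an easy channel for exfiltrated data.
from urllib.parse import urlparse

# Hypothetical allowlist for illustration only.
TRUSTED_IMAGE_HOSTS = {"upload.wikimedia.org", "images.example-cdn.com"}

def is_url_safe(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = (parsed.hostname or "").lower()
    return host in TRUSTED_IMAGE_HOSTS and not parsed.query

# An image URL pointing at attacker infrastructure with a secret appended
# as a query parameter would be refused, so the image is never fetched.
print(is_url_safe("https://attacker.blob.core.windows.net/x.png?key=API_KEY"))  # False
print(is_url_safe("https://upload.wikimedia.org/logo.png"))                     # True
```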
