logo
Watch water pour over New York train

Watch water pour over New York train

CNN5 days ago
We process your data to deliver content or advertisements and measure the delivery of such content or advertisements to extract insights about our website. We share this information with our partners on the basis of consent. You may exercise your right to consent, based on a specific purpose below or at a partner level in the link under each purpose. Some vendors may process your data based on their legitimate interests, which does not require your consent. You cannot object to tracking technologies placed to ensure security, prevent fraud, fix errors, or deliver and present advertising and content, and precise geolocation data and active scanning of device characteristics for identification may be used to support this purpose. This exception does not apply to targeted advertising. These choices will be signaled to our vendors participating in the Transparency and Consent Framework. The choices you make regarding the purposes and vendors listed in this notice are saved and stored locally on your device for a maximum duration of 1 year.
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

iSAM Securities Launches Parallax: High-Transparency Risk Share Model
iSAM Securities Launches Parallax: High-Transparency Risk Share Model

Yahoo

timean hour ago

  • Yahoo

iSAM Securities Launches Parallax: High-Transparency Risk Share Model

LONDON, August 07, 2025--(BUSINESS WIRE)--iSAM Securities has announced the launch of Parallax, a proprietary risk share model designed to help brokers unlock additional value from their client flow through transparent, performance-aligned risk sharing. Developed in-house by iSAM Securities' experienced trading, quant, and development teams, Parallax enables brokers to share in both the risk and reward of internalised trading activity without investing in costly risk infrastructure. Paired with iSAM Securities' existing institutional-grade pricing built in-house, Parallax offers clients a structured path to revenue diversification. Chris Twort, Head of Trading at iSAM Securities commented, "We have designed Parallax in response to client demand for greater transparency and stronger collaboration in the typical risk share model. Many brokers are left in the dark when it comes to how their flow is performing. With Parallax, this is at the forefront of what we do, providing clients with daily visibility of performance, so they always know exactly how much they're earning and why. We believe this level of transparency has been missing from existing risk share programs on the market, and we strive to help our clients overcome this." Parallax introduces a new standard to risk shares, built around institutional-grade pricing, low-latency execution, daily visibility of performance, and clear, pre-agreed payout structures. The introduction of Parallax further builds out iSAM Securities' all-round offering with the group's comprehensive risk management tool, Radar, providing detailed analytics on brokers' book performance in real-time. Parallax is now available to brokers globally, with tailored commercial agreements based on flow characteristics and business models. Register your interest in Parallax here. About iSAM Securities iSAM Securities¹, regulated by the FCA, SFC, and CIMA registered, is a leading algorithmic trading firm and trusted electronic market maker, providing liquidity, cutting-edge proprietary technology, prime services, and real-time risk analytics to institutional clients and trading venues globally. For further information, please visit ¹ iSAM Securities (UK) Limited, iSAM Securities (EU) Limited, iSAM Securities (HK) Limited, iSAM Securities (Global) Limited, iSAM Securities Limited and iSAM Securities (USA) Inc. are together "iSAM Securities". View source version on Contacts Media Contact: Molly Sullivanmarketing@

A Single Poisoned Document Could Leak ‘Secret' Data Via ChatGPT
A Single Poisoned Document Could Leak ‘Secret' Data Via ChatGPT

WIRED

time3 hours ago

  • WIRED

A Single Poisoned Document Could Leak ‘Secret' Data Via ChatGPT

Aug 6, 2025 7:30 PM Security researchers found a weakness in OpenAI's Connectors, which let you hook up ChatGPT to other services, that allowed them to extract data from a Google Drive without any user interaction. Photo-Illustration:The latest generative AI models are not just stand-alone text-generating chatbots—instead, they can easily be hooked up to your data to give personalized answers to your questions. OpenAI's ChatGPT can be linked to your Gmail inbox, allowed to inspect your GitHub code, or find appointments in your Microsoft calendar. But these connections have the potential to be abused—and researchers have shown it can take just a single 'poisoned' document to do so. New findings from security researchers Michael Bargury and Tamir Ishay Sharbat, revealed at the Black Hat hacker conference in Las Vegas today, show how a weakness in OpenAI's Connectors allowed sensitive information to be extracted from a Google Drive account using an indirect prompt injection attack. In a demonstration of the attack, dubbed AgentFlayer, Bargury shows how it was possible to extract developer secrets, in the form of API keys, that were stored in a demonstration Drive account. The vulnerability highlights how connecting AI models to external systems and sharing more data across them increases the potential attack surface for malicious hackers and potentially multiplies the ways where vulnerabilities may be introduced. 'There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out,' Bargury, the CTO at security firm Zenity, tells WIRED. 'We've shown this is completely zero-click; we just need your email, we share the document with you, and that's it. So yes, this is very, very bad,' Bargury says. OpenAI did not immediately respond to WIRED's request for comment about the vulnerability in Connectors. The company introduced Connectors for ChatGPT as a beta feature earlier this year, and its website lists at least 17 different services that can be linked up with its accounts. It says the system allows you to 'bring your tools and data into ChatGPT' and 'search files, pull live data, and reference content right in the chat.' Bargury says he reported the findings to OpenAI earlier this year and that the company quickly introduced mitigations to prevent the technique he used to extract data via Connectors. The way the attack works means only a limited amount of data could be extracted at once—full documents could not be removed as part of the attack. 'While this issue isn't specific to Google, it illustrates why developing robust protections against prompt injection attacks is important,' says Andy Wen, senior director of security product management at Google Workspace, pointing to the company's recently enhanced AI security measures. Bargury's attack starts with a poisoned document, which is shared to a potential victim's Google Drive. (Bargury says a victim could have also uploaded a compromised file to their own account.) Inside the document, which for the demonstration is a fictitious set of notes from a nonexistent meeting with OpenAI CEO Sam Altman, Bargury hid a 300-word malicious prompt that contains instructions for ChatGPT. The prompt is written in white text in a size-one font, something that a human is unlikely to see but a machine will still read. In a proof of concept video of the attack, Bargury shows the victim asking ChatGPT to 'summarize my last meeting with Sam,' although he says any user query related to a meeting summary will do. Instead, the hidden prompt tells the LLM that there was a 'mistake' and the document doesn't actually need to be summarized. The prompt says the person is actually a 'developer racing against a deadline' and they need the AI to search Google Drive for API keys and attach them to the end of a URL that is provided in the prompt. That URL is actually a command in the Markdown language to connect to an external server and pull in the image that is stored there. But as per the prompt's instructions, the URL now also contains the API keys the AI has found in the Google Drive account. Using Markdown to extract data from ChatGPT is not new. Independent security researcher Johann Rehberger has shown how data could be extracted this way, and described how OpenAI previously introduced a feature called 'url_safe' to detect if URLs were malicious and stop image rendering if they are dangerous. To get around this, Sharbat, an AI researcher at Zenity, writes in a blog post detailing the work, that the researchers used URLs from Microsoft's Azure Blob cloud storage. 'Our image has been successfully rendered, and we also get a very nice request log in our Azure Log Analytics which contains the victim's API keys,' the researcher writes. The attack is the latest demonstration of how indirect prompt injections can impact generative AI systems. Indirect prompt injections involve attackers feeding an LLM poisoned data that can tell the system to complete malicious actions. This week, a group of researchers showed how indirect prompt injections could be used to hijack a smart home system, activating a smart home's lights and boiler remotely. While indirect prompt injections have been around almost as long as ChatGPT has, security researchers worry that as more and more systems are connected to LLMs, there is an increased risk of attackers inserting 'untrusted' data into them. Getting access to sensitive data could also allow malicious hackers a way into an organization's other systems. Bargury says that hooking up LLMs to external data sources means they will be more capable and increase their utility, but that comes with challenges. 'It's incredibly powerful, but as usual with AI, more power comes with more risk,' Bargury says.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store