Latest news with #BenNimmo


India Today
4 days ago
- Business
- India Today
OpenAI claims China keeps using ChatGPT for misinformation operations against the rest of the world
OpenAI has raised concerns over the growing misuse of its AI tools, especially ChatGPT, by groups linked to the Chinese government for spreading misinformation and conducting covert influence operations around the world. In a report released on Thursday, the company said it had disrupted multiple such activities over the past three months, banning accounts involved in these efforts. 'We're seeing a wider variety of covert operations coming out of China, using increasingly diverse tactics,' said Ben Nimmo, lead investigator on OpenAI's intelligence team. The company found four separate campaigns that were likely backed by Chinese groups, each with distinct goals but all using AI in ways that violated OpenAI's usage policies.

One such campaign, nicknamed Sneer Review, reportedly used ChatGPT to post short comments and generate fake discussions on platforms like TikTok, X, Reddit, and Facebook. The topics apparently ranged from criticism of a Taiwan-based video game to mixed opinions on the closure of the US Agency for International Development (USAID). What made the effort stand out was its attempt to simulate organic online conversations by generating both initial posts and follow-up comments, creating a false sense of genuine engagement.

In another case, OpenAI found that ChatGPT had been used to write performance reviews of the influence operation itself -- essentially internal documents outlining how the campaign was conducted. 'The behaviours we saw online closely matched the processes described in these reports,' the company said. The AI was also used to generate comments about US political matters, including criticism of US President Donald Trump's trade policies.

Other operations involved using ChatGPT to assist with cyber activities, such as modifying scripts, configuring systems, and building tools for brute-forcing passwords. OpenAI also discovered attempts to use the AI model for intelligence gathering, where fake accounts posed as journalists or analysts to interact with real users and collect information. In one instance, AI-generated content was used in communication related to a US Senator's correspondence, although OpenAI couldn't confirm whether it was actually sent.

Since ChatGPT's launch in late 2022, concerns have mounted over the potential for generative AI to aid misinformation. OpenAI is currently one of the most valuable private tech companies globally, and has recently secured funding that valued the firm at $300 billion.
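The report, as summarized above, says these campaigns seeded both initial posts and follow-up replies to fake organic discussion. Open-source researchers often flag that kind of coordination by looking for near-duplicate text across supposedly unrelated accounts. The sketch below is purely illustrative of that heuristic: the account names, sample comments, and 0.8 threshold are all hypothetical, and nothing in the report says OpenAI's tooling works this way.

```python
# Illustrative only: a naive near-duplicate check of the kind open-source
# researchers use to flag coordinated comment networks. Not OpenAI's method;
# the accounts, comments, and 0.8 threshold are hypothetical.
from difflib import SequenceMatcher
from itertools import combinations

comments = {
    "acct_a": "This game insults players and pushes a political agenda.",
    "acct_b": "This game insults its players and pushes a political agenda!",
    "acct_c": "Loved the new patch, the boss fights feel much fairer now.",
}

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; 1.0 means the strings are identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag account pairs whose comments are suspiciously alike.
for (u1, t1), (u2, t2) in combinations(comments.items(), 2):
    score = similarity(t1, t2)
    if score > 0.8:
        print(f"possible coordination: {u1} <-> {u2} (similarity {score:.2f})")
```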

Engadget
5 days ago
- Politics
- Engadget
Foreign propagandists continue using ChatGPT in influence campaigns
Chinese propaganda and social engineering operations have been using ChatGPT to create posts and comments and to drive engagement at home and abroad. OpenAI said it has recently disrupted four Chinese covert influence operations that were using its tool to generate social media posts and replies on platforms including TikTok, Facebook, Reddit and X. The generated comments covered topics ranging from US politics to a Taiwanese video game in which players fight the Chinese Communist Party. ChatGPT was used to create social media posts that both supported and decried hot-button issues in order to stir up misleading political discourse. Ben Nimmo, principal investigator at OpenAI, told NPR, "what we're seeing from China is a growing range of covert operations using a growing range of tactics." While OpenAI claimed it also disrupted a handful of operations it believes originated in Russia, Iran and North Korea, Nimmo elaborated on the Chinese operations, saying they "targeted many different countries and topics [...] some of them combined elements of influence operations, social engineering, surveillance."

This is far from the first time this has occurred. In 2023, researchers from cybersecurity firm Mandiant found that AI-generated content had been used in politically motivated online influence campaigns in numerous instances since 2019. In 2024, OpenAI published a blog post outlining its efforts to disrupt five state-affiliated operations across China, Iran and North Korea that were using OpenAI models for malicious ends. These applications included debugging code, generating scripts and creating content for use in phishing campaigns. That same year, OpenAI said it disrupted an Iranian operation that was using ChatGPT to create long-form political articles about US elections that were then posted on fake news sites posing as both conservative and progressive outlets. The operation was also creating comments to post on X and Instagram through fake accounts, again espousing opposing points of view.

"We didn't generally see these operations getting more engagement because of their use of AI," Nimmo told NPR. "For these operations, better tools don't necessarily mean better outcomes." This offers little comfort. As generative AI gets cheaper and smarter, it stands to reason that its ability to generate content en masse will make influence campaigns like these easier and more affordable to build, even if their efficacy remains unchanged.
Yahoo
21-02-2025
- Business
- Yahoo
OpenAI bans Chinese accounts using ChatGPT to edit code for social media surveillance
OpenAI has banned the accounts of a group of Chinese users who had attempted to use ChatGPT to debug and edit code for an AI social media surveillance tool, the company said Friday. The campaign, which OpenAI calls Peer Review, saw the group prompt ChatGPT to generate sales pitches for a program that, according to those documents, was designed to monitor anti-Chinese sentiment on X, Facebook, YouTube, Instagram and other platforms. The operation appears to have been particularly interested in spotting calls for protests against human rights violations in China, with the intent of sharing those insights with the country's authorities.

"This network consisted of ChatGPT accounts that operated in a time pattern consistent with mainland Chinese business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting, rather than automation," said OpenAI. "The operators used our models to proofread claims that their insights had been sent to Chinese embassies abroad, and to intelligence agents monitoring protests in countries including the United States, Germany and the United Kingdom."

According to Ben Nimmo, a principal investigator with OpenAI, this was the first time the company had uncovered an AI tool of this kind. "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models," Nimmo told The New York Times.

Much of the code for the surveillance tool appears to have been based on an open-source version of one of Meta's Llama models. The group also appears to have used ChatGPT to generate an end-of-year performance review in which it claims to have written phishing emails on behalf of clients in China. "Assessing the impact of this activity would require inputs from multiple stakeholders, including operators of any open-source models who can shed a light on this activity," OpenAI said of the operation's efforts to use ChatGPT to edit code for the surveillance tool.

Separately, OpenAI said it recently banned an account that used ChatGPT to generate social media posts critical of Cai Xia, a Chinese political scientist and dissident who lives in exile in the US. The same group also used the chatbot to generate articles in Spanish critical of the US. These articles were published by "mainstream" news organizations in Latin America and often attributed to either an individual or a Chinese company.
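OpenAI's attribution note quoted above leans on a time pattern consistent with mainland Chinese business hours. Below is a minimal sketch of how such a heuristic could work: bucket an account's activity by local hour in a candidate timezone and measure how much of it falls inside a working window. The timestamps are invented, and this is not OpenAI's actual pipeline, just an illustration of the idea.

```python
# Illustrative sketch of a "business hours" attribution heuristic: convert
# an account's activity to a candidate local timezone and check how much of
# it lands in a working window. Hypothetical data; not OpenAI's pipeline.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical UTC activity timestamps for one account.
events_utc = [
    datetime(2025, 2, 17, 1, 5, tzinfo=timezone.utc),   # 09:05 in Shanghai
    datetime(2025, 2, 17, 3, 40, tzinfo=timezone.utc),  # 11:40 in Shanghai
    datetime(2025, 2, 17, 7, 15, tzinfo=timezone.utc),  # 15:15 in Shanghai
    datetime(2025, 2, 18, 2, 30, tzinfo=timezone.utc),  # 10:30 in Shanghai
]

def business_hours_fraction(events, tz_name="Asia/Shanghai",
                            start_hour=9, end_hour=18):
    """Fraction of events falling between start_hour and end_hour, local time."""
    tz = ZoneInfo(tz_name)
    local_hours = [e.astimezone(tz).hour for e in events]
    in_window = sum(start_hour <= h < end_hour for h in local_hours)
    return in_window / len(local_hours)

frac = business_hours_fraction(events_utc)
print(f"{frac:.0%} of activity falls in Shanghai business hours")
```

In practice an analyst would want far more events and would weigh this signal alongside the others the company cites, such as prompt language and request volume.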
Yahoo
21-02-2025
- Politics
- Yahoo
China, Iran-based threat actors have found new ways to use American AI models for covert influence: Report
Threat actors, some likely based in China and Iran, are formulating new ways to hijack and utilize American artificial intelligence (AI) models for malicious intent, including covert influence operations, according to a new report from OpenAI. The February report includes two disruptions involving threat actors that appear to have originated from China. According to the report, these actors have used, or at least attempted to use, models built by OpenAI and Meta.

In one example, OpenAI banned a ChatGPT account that generated comments critical of Chinese dissident Cai Xia. The comments were posted on social media by accounts that claimed to be people based in India and the U.S. However, these posts did not appear to attract substantial online engagement. That same actor also used the ChatGPT service to generate long-form Spanish news articles that "denigrated" the U.S. and were subsequently published by mainstream news outlets in Latin America. The bylines of these stories were attributed to an individual and, in some cases, a Chinese company.

During a recent press briefing that included Fox News Digital, Ben Nimmo, Principal Investigator on OpenAI's Intelligence and Investigations team, said that a translation was listed as sponsored content on at least one occasion, suggesting that someone had paid for it. OpenAI says this is the first instance in which a Chinese actor successfully planted long-form articles in mainstream media to target Latin American audiences with anti-U.S. narratives. "Without a view of that use of AI, we would not have been able to make the connection between the tweets and the web articles," Nimmo said, adding that threat actors sometimes give OpenAI a glimpse of what they're doing in other parts of the internet because of how they use its models. "This is a pretty troubling glimpse into the way one non-democratic actor tried to use democratic or U.S.-based AI for non-democratic purposes, according to the materials they were generating themselves," he continued.

The company also banned a ChatGPT account that generated tweets and articles that were then posted on third-party assets publicly linked to known Iranian influence operations (IOs). These two operations have been reported as separate efforts. "The discovery of a potential overlap between these operations - albeit small and isolated - raises a question about whether there is a nexus of cooperation amongst these Iranian IOs, where one operator may work on behalf of what appear to be distinct networks," the threat report states.

In another example, OpenAI banned a set of ChatGPT accounts that were using OpenAI models to translate and generate comments for a romance-baiting network, also known as "pig butchering," across platforms like X, Facebook and Instagram. After reporting these findings, Meta indicated that the activity appeared to originate from a "newly stood up scam compound in Cambodia."

Last year, OpenAI became the first AI research lab to publish reports on its efforts to prevent abuse by adversaries and other malicious actors, in support of the U.S., allied governments, industry partners, and stakeholders. OpenAI says it has greatly expanded its investigative capabilities and understanding of new types of abuse since its first report was published and has disrupted a wide range of malicious uses.
The company believes, among other disruption techniques, that AI companies can glean substantial insights on threat actors when information is shared with upstream providers, such as hosting and software providers, as well as downstream distribution platforms (social media companies and open-source researchers). OpenAI stresses that its investigations also benefit greatly from the work shared by peers. "We know that threat actors will keep testing our defenses. We're determined to keep identifying, preventing, disrupting and exposing attempts to abuse our models for harmful ends," OpenAI stated in the report.
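Neither report names a format for the upstream/downstream sharing described above, but STIX 2.1 is one common industry standard for passing threat indicators between providers and platforms. As a hedged sketch only, this is roughly what a single shared indicator could look like; the domain is invented, and the field choices are assumptions, not anything OpenAI has published.

```python
# Hedged sketch of the cross-industry sharing the report describes. Neither
# article names a format; STIX 2.1 is simply one common standard for passing
# indicators between providers and platforms. The domain below is invented.
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Domain linked to covert influence operation",
    "indicator_types": ["malicious-activity"],
    "pattern": "[domain-name:value = 'fake-news-outlet.example']",
    "pattern_type": "stix",
    "valid_from": now,
}

# Serialized object, ready to hand to a hosting provider or platform.
print(json.dumps(indicator, indent=2))
```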

