Latest news with #CaiXia

OpenAI Bans Chinese Accounts That Used ChatGPT to Create Anti-US Propaganda

Yahoo

24-02-2025

  • Business
  • Yahoo

OpenAI has removed the accounts of several users linked to China, which it says were used to generate propaganda material published in mainstream newspapers in Latin America. In an updated report spotted by Reuters, OpenAI points to a number of incidents in which it believes ChatGPT was used to generate Spanish-language newspaper articles criticizing the US, which were then published in well-known newspapers in Mexico, Peru, and Ecuador. The articles centered on political divisions in the US and current affairs, in particular drug use and homelessness.

The users reportedly prompted ChatGPT in Chinese, during mainland Chinese working hours, to generate the Spanish-language articles. OpenAI says they also used ChatGPT to translate receipts from Latin American newspapers, indicating the articles may well have been paid placements. The accounts also allegedly used ChatGPT to generate short-form material, including comments critical of Cai Xia, a well-known Chinese political dissident, which were then posted on X by users claiming to be from the US or India.

"This is the first time we've observed a Chinese actor successfully planting long-form articles in mainstream media to target Latin American audiences with anti-US narratives, and the first time this company has appeared linked to deceptive social media activity," OpenAI says.

OpenAI says some of the activity is consistent with the covert influence operation known as "Spamouflage," a major Chinese operation spotted on over 50 social media platforms, including Facebook, Instagram, TikTok, Twitter, and Reddit. The campaign, identified by Meta in 2023, targeted users in the US, Taiwan, UK, Australia, and Japan with positive information about China.

In May 2024, OpenAI reported that groups based in Russia, China, Iran, and Israel had used the company's AI models to generate short comments on social media, as well as to translate and proofread text in various languages. For example, a Russian propaganda group known as Bad Grammar used OpenAI's technology to generate fake replies about Ukraine, in English and Russian, to specific posts on Telegram.

Though international propaganda groups have leveraged OpenAI's tools before, the company considers this incident unique because it targeted mainstream media, calling it "a previously unreported line of effort, which ran in parallel to more typical social media activity, and may have reached a significantly wider audience."

China, Iran-based threat actors have found new ways to use American AI models for covert influence: Report

Fox News

21-02-2025

  • Business
  • Fox News

Threat actors, some likely based in China and Iran, are formulating new ways to hijack and use American artificial intelligence (AI) models for malicious purposes, including covert influence operations, according to a new report from OpenAI. The February report includes two disruptions involving threat actors that appear to have originated from China. According to the report, these actors used, or at least attempted to use, models built by OpenAI and Meta.

In one example, OpenAI banned a ChatGPT account that generated comments critical of Chinese dissident Cai Xia. The comments were posted on social media by accounts that claimed to be people based in India and the U.S. However, the posts did not appear to attract substantial online engagement. That same actor also used ChatGPT to generate long-form Spanish news articles that "denigrated" the U.S. and were subsequently published by mainstream news outlets in Latin America. The bylines of these stories were attributed to an individual and, in some cases, a Chinese company.

During a recent press briefing that included Fox News Digital, Ben Nimmo, Principal Investigator on OpenAI's Intelligence and Investigations team, said that a translation was listed as sponsored content on at least one occasion, suggesting that someone had paid for it. OpenAI says this is the first instance in which a Chinese actor successfully planted long-form articles in mainstream media to target Latin American audiences with anti-U.S. narratives.

"Without a view of that use of AI, we would not have been able to make the connection between the tweets and the web articles," Nimmo said, adding that threat actors sometimes give OpenAI a glimpse of what they are doing in other parts of the internet because of how they use its models. "This is a pretty troubling glimpse into the way one non-democratic actor tried to use democratic or U.S.-based AI for non-democratic purposes, according to the materials they were generating themselves," he continued.

The company also banned a ChatGPT account that generated tweets and articles that were then posted on third-party assets publicly linked to known Iranian influence operations (IOs). These two operations have been reported as separate efforts. "The discovery of a potential overlap between these operations - albeit small and isolated - raises a question about whether there is a nexus of cooperation amongst these Iranian IOs, where one operator may work on behalf of what appear to be distinct networks," the threat report states.

In another example, OpenAI banned a set of ChatGPT accounts that were using OpenAI models to translate and generate comments for a romance-baiting network, also known as "pig butchering," across platforms like X, Facebook and Instagram. After OpenAI reported these findings, Meta indicated that the activity appeared to originate from a "newly stood up scam compound in Cambodia."

Last year, OpenAI became the first AI research lab to publish reports on its efforts to prevent abuse by adversaries and other malicious actors, in support of the U.S., allied governments, industry partners, and stakeholders. OpenAI says it has greatly expanded its investigative capabilities and understanding of new types of abuse since its first report was published and has disrupted a wide range of malicious uses.

The company believes that, among other disruption techniques, AI companies can glean substantial insights on threat actors if that information is shared with upstream providers, such as hosting and software providers, as well as with downstream distribution platforms, including social media companies and open-source researchers. OpenAI stresses that its investigations also benefit greatly from the work shared by peers. "We know that threat actors will keep testing our defenses. We're determined to keep identifying, preventing, disrupting and exposing attempts to abuse our models for harmful ends," OpenAI stated in the report.

ChatGPT-generated op-eds appeared in mainstream Latin American media outlets

NBC News

21-02-2025

  • Politics
  • NBC News

Chinese propagandists used ChatGPT to write and translate op-eds that they successfully planted in Spanish-language news outlets last fall, researchers said Friday. A report published by OpenAI, the company behind the artificial intelligence chatbot ChatGPT, found that a pro-China campaign had used the program to produce 18 articles and op-eds that were published across eight Spanish-language media platforms. Four of the outlets are Peruvian, two Ecuadorian, one Mexican and one Spanish. None of the eight outlets responded to requests for comment.

The articles don't mention China but are broadly critical of the U.S., highlighting problems like homelessness, racism, crime and income inequality. Often, the people using ChatGPT asked it to translate and expand existing articles originally written in Chinese, the researchers found. 'The actor generated these articles by asking our models to translate and expand publicly available Chinese-language articles,' the researchers wrote in the report. 'This is the first time we've observed a likely Chinese influence actor successfully publishing articles in mainstream outlets in Latin America,' they added.

The researchers' findings came as part of OpenAI's quarterly threat report, which also documented efforts by a range of bad actors to use its tools in malicious ways, including generating fake articles about the Ghanaian presidential election and perpetuating romance scams.

One of the news outlets indicated an article was 'sponsored,' but the rest were presented as authentic opinions. Most were published in October 2024, in the lead-up to the Asia-Pacific Economic Cooperation conference, held in Peru. Some of the articles carry the byline of a company, Jilin Yousen Culture Communication Co., instead of a person. A profile of a company with that name on the Chinese search engine Baidu describes it as a multimedia tech and public relations firm. A spokesperson for the Chinese Embassy in Washington, D.C., didn't immediately respond to a request for comment.

All of the articles are still live, and many were published the same day that ChatGPT produced them. The people who used the chatbot to generate the articles usually worked during daytime business hours in China, OpenAI found. The accounts that asked ChatGPT to write the op-eds also sometimes had the program write short posts in English criticizing a Chinese dissident, Cai Xia, which were posted by various accounts on X that purported to be people from the U.S. or India.

For years, U.S. tech companies have accused pro-Chinese propagandists of using inauthentic accounts on Western social media platforms to target people around the world, including Americans, with messages that align with Beijing's priorities, like promoting the Chinese Communist Party, downplaying allegations of human rights abuses in China, or criticizing the U.S. China has consistently denied any such efforts. Those efforts sometimes involve significant investment but almost always fall flat, with few people interacting with them. The X posts tied to the Latin American op-ed campaign similarly did not appear to get any significant social media engagement, OpenAI said.
