
SoftBank, OpenAI announce Stargate UAE AI data center project
TOKYO -- SoftBank Group announced Thursday the launch of the Stargate UAE artificial intelligence infrastructure development project, beginning with a 200-megawatt data center expected to go live in the Middle Eastern country by 2026.
Construction will be led by G42, an AI development company backed by Microsoft and others. The data center, whose capacity is expected to eventually reach 1 gigawatt, will be operated by ChatGPT developer OpenAI and Oracle.

Related Articles


Asahi Shimbun, 14 hours ago
OpenAI finds more Chinese groups using ChatGPT for malicious purposes
SAN FRANCISCO--OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio. OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID. Some content also criticized U.S. President Donald Trump's sweeping tariffs, generating X posts such as "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within U.S. political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings. OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.


Japan Today, 19 hours ago
Reddit sues AI giant Anthropic over content use
Social media outlet Reddit filed a lawsuit Wednesday against artificial intelligence company Anthropic, accusing the startup of illegally scraping millions of user comments to train its Claude chatbot without permission or compensation.

The lawsuit, filed in a California state court, represents the latest front in the growing battle between content providers and AI companies over the use of data to train the increasingly sophisticated language models that power the generative AI revolution.

Anthropic, valued at $61.5 billion and heavily backed by Amazon, was founded in 2021 by former executives from OpenAI, the creator of ChatGPT. The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.

"This case is about the two faces of Anthropic: the public face that attempts to ingratiate itself into the consumer's consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets," the suit said.

According to the complaint, Anthropic has been training its models on Reddit content since at least December 2021, with CEO Dario Amodei co-authoring research papers that specifically identified high-quality content for data training. The lawsuit alleges that despite Anthropic's public claims that it had blocked its bots from accessing Reddit, the company's automated systems continued to harvest Reddit's servers more than 100,000 times in subsequent months.

Reddit is seeking monetary damages and a court injunction to force Anthropic to comply with its user agreement terms. The company has requested a jury trial. In an email to AFP, Anthropic said: "We disagree with Reddit's claims and will defend ourselves vigorously."

Reddit has entered into licensing agreements with other AI giants including Google and OpenAI, which allow those companies to use Reddit content under terms that protect user privacy and provide compensation to the platform. Those deals have helped lift Reddit's share price since it went public in 2024; Reddit shares closed up more than six percent on Wednesday following news of the lawsuit.

Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment. AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation. Though most of these lawsuits are still in early stages, their outcomes could have a profound effect on the shape of the AI industry. © 2025 AFP

