Leaked Memo: Anthropic CEO Says the Company Will Pursue Gulf State Investments After All

WIRED · Jul 21, 2025 8:46 PM

'Unfortunately, I think 'No bad person should ever benefit from our success' is a pretty difficult principle to run a business on,' wrote Anthropic CEO Dario Amodei in a note to staff obtained by WIRED.

[Photo: Anthropic CEO Dario Amodei attends the World Economic Forum in Davos, Switzerland, in January 2025.]

Anthropic is planning to seek investment from the United Arab Emirates and Qatar, according to a Slack message CEO Dario Amodei sent to staff Sunday morning, which WIRED obtained.
Weighing the pros and cons, Amodei acknowledged in his note that accepting money from Middle East leaders would likely enrich 'dictators.' 'This is a real downside and I'm not thrilled about it,' he wrote. 'Unfortunately, I think 'No bad person should ever benefit from our success' is a pretty difficult principle to run a business on.'
The message comes as AI companies race to secure the massive amounts of capital required to train and develop frontier AI models. In January, OpenAI announced a $500 billion data center project called Stargate with financial backing from MGX, a state-owned Emirati investment firm. Four months later, the company announced it was planning to build a data center in Abu Dhabi, as part of a push to help foreign governments 'build sovereign AI capability in coordination with the US.'
'As an American company at the frontier of AI development, we have always believed the supply chain of frontier AI model development should be on American soil in order to maintain America's lead,' said Anthropic spokesperson Christopher Nulty in a statement. 'As Dario has said before, we believe fundamentally in sharing the benefits of AI and serve the Middle East and regions around the world commercially, in line with our Usage Policy.'
In May, President Donald Trump toured the United Arab Emirates and Saudi Arabia as part of a four-day trip focused on economic investments. A cadre of tech leaders, including Elon Musk, Sam Altman, and Nvidia chief Jensen Huang, joined him for a meeting with the crown prince of Saudi Arabia. Anthropic's leadership was notably absent.
In his memo, Amodei acknowledged that the decision to pursue investments from authoritarian regimes would lead to accusations of hypocrisy. In an essay titled 'Machines of Loving Grace,' Amodei wrote: 'Democracies need to be able to set the terms by which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries.'
In 2024, Anthropic decided not to accept money from Saudi Arabia, citing national security concerns, per CNBC. The news came as FTX, the failed cryptocurrency exchange, went into bankruptcy proceedings, and its nearly 8 percent stake in Anthropic went up for sale. Ultimately, a majority of those shares went to ATIC Third International Investment, a UAE firm. At the time, the stake was worth about $500 million.
Now, it appears Anthropic is poised to accept Gulf State money—though the company hasn't said whether it has changed its stance on Saudi Arabia in particular. 'There is a truly giant amount of capital in the Middle East, easily $100B or more,' Amodei wrote in the memo. 'If we want to stay on the frontier, we gain a very large benefit from having access to this capital. Without it, it is substantially harder to stay on the frontier.'
By pursuing a 'narrowly scoped, purely financial investment from Gulf countries,' the company hopes to avoid the risks associated with allowing outside investors to gain 'leverage' over the company, the memo says.
'The basis of our opposition to large training clusters in the Middle East, or to shipping H20s to China, is that the 'supply chain' of AI is dangerous to hand to authoritarian governments—since AI is likely to be the most powerful technology in the world, these governments can use it to gain military dominance or to gain leverage over democratic countries,' Amodei wrote in the memo, referring to Nvidia chips.
Still, the CEO admitted investors could gain 'soft power' through the promise of future funding. 'The implicit promise of investing in future rounds can create a situation where they have some soft power, making it a bit harder to resist these things in the future. In fact, I actually am worried that getting the largest possible amounts of investment might be difficult without agreeing to some of these other things,' Amodei wrote. 'But I think the right response to this is simply to see how much we can get without agreeing to these things (which I think are likely still many billions), and then hold firm if they ask.'
In a section titled 'Erosion of Standards,' Amodei notes that the reason Anthropic 'vociferously pushed for not allowing big data centers in the Middle East' was because 'without a central authority blocking them, there's a race to the bottom where companies gain a lot of advantage by getting deeper and deeper in bed with the Middle East.'
'Unfortunately, having failed to prevent that dynamic at the collective level, we're now stuck with it as an individual company, and the median position across the other companies appears to be 'outsourcing our largest 5 GW training runs to UAE/Saudi is fine,'' the CEO said, likely referring to the United States' agreement to build a new AI data center in the UAE powered by 5 gigawatts of electricity. 'That puts us at a significant disadvantage, and we need to look for ways to make up some of that disadvantage while remaining less objectionable. I really wish we weren't in this position, but we are.'
In a section titled 'Comms Headaches,' the CEO acknowledged that taking investment from Gulf States would likely lead to public criticism. The 'media / Twitter / the outside world' is 'always looking for hypocrisy, while also being very stupid and therefore having a poor understanding of substantive issues,' Amodei wrote. He added: 'It's perfectly consistent to advocate for a policy of 'No one is allowed to do x,' but then if that policy fails and everyone else does X, to reluctantly do x ourselves.'
'We are also interested in serving the region commercially, which is something I think is actually pure positive as long as we don't build data centers there and as long as we enforce our [acceptable use policy],' he added. 'In fact, it could have important benefits for the world including improving human health, aiding economic development, etc.'
He concluded: 'As with many decisions, this one has downsides, but we believe it's the right one overall.'

Related Articles

Mark Zuckerberg details Meta's superintelligence plans

Axios · 20 minutes ago

Shengjia Zhao — formerly of OpenAI — will be chief scientist at Meta's new Superintelligence Lab, Mark Zuckerberg announced on Threads on Friday.

Why it matters: The company is spending billions of dollars to hire key employees as it looks to jumpstart its effort and compete with Google, OpenAI and others.

What they're saying: "In this role, Shengjia will set the research agenda and scientific direction for our new lab working directly with me and Alex," Zuckerberg wrote on Threads, presumably meaning former Scale CEO and founder Alexandr Wang.

Catch up quick: In addition to individual pay packages reportedly worth up to hundreds of millions of dollars per person in some cases, the company is investing $14.3 billion to take a 49% stake in Scale AI and hire its CEO Alexandr Wang. The company has been poaching talent from across the industry, nabbing key folks from Apple, OpenAI and Ilya Sutskever's Safe Superintelligence. From Apple, Meta grabbed AI experts Mark Lee and Tom Gunter, after hiring their boss Ruoming Pang, former head of Apple's LLM team, Bloomberg reported. Meta also hired Tianhe Yu, Cosmo Du and Weiyue Wang, three of the engineers who worked on the Gemini model that achieved gold medal performance at last week's International Mathematical Olympiad, right after the results were announced, per The Information.

Between the lines: Hiring talent is just one part of the equation, of course.

Think your ChatGPT therapy sessions are private? Think again.

Fast Company · 20 minutes ago

If you've been confessing your deepest secrets to an AI chatbot, it might be time to reevaluate. With more people turning to AI for instant life coaching, tools like ChatGPT are sucking up massive amounts of personal information on their users. While that data stays private under ideal circumstances, it could be dredged up in court – a scenario that OpenAI CEO Sam Altman warned users about in an appearance on Theo Von's popular podcast this week.

'One example that we've been thinking about a lot… people talk about the most personal shit in their lives to ChatGPT,' Altman said. 'Young people especially, use it as a therapist, as a life coach: "I'm having these relationship problems, what should I do?" And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it, there's doctor-patient confidentiality, there's legal confidentiality.' Altman says that as a society we 'haven't figured that out yet' for ChatGPT.

Altman called for a policy framework for AI, though in reality OpenAI and its peers have lobbied for a regulatory light touch. 'If you go talk to ChatGPT about your most sensitive stuff and then there's a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up,' Altman told Von, arguing that AI conversations should be treated with the same level of privacy as a chat with a therapist.

While interactions with doctors and therapists are protected by federal privacy laws in the U.S., exceptions exist for instances in which someone is a threat to themselves or others. And even with those strong privacy protections, relevant medical information can be surfaced by court order, subpoena or a warrant. Altman's argument seems to be that from a regulatory perspective, ChatGPT shares more in common with licensed, trained specialists than it does with a search engine. 'I think we should have the same concept of privacy for your conversations with AI that we do with a therapist,' he said.

Altman also expressed concerns about how AI will adversely impact mental health, even as people seek its advice in lieu of the real thing. 'Another thing I'm afraid of… is just what this is going to mean for users' mental health. There's a lot of people that talk to ChatGPT all day long,' Altman said. 'There are these new AI companions that people talk to like they would a girlfriend or boyfriend. I don't think we know yet the ways in which [AI] is going to have those negative impacts, but I feel for sure it's going to have some, and we'll have to, I hope, we can learn to mitigate it quickly.'

Even OpenAI's CEO Says Be Careful What You Share With ChatGPT

CNET · 20 minutes ago

Maybe don't spill your deepest, darkest secrets to an AI chatbot. You don't have to take my word for it. Take it from the guy behind the most popular generative AI model on the market.

Sam Altman, the CEO of ChatGPT maker OpenAI, raised the issue this week in an interview with host Theo Von on the This Past Weekend podcast. He suggested that your conversations with AI should have similar protections as those you have with your doctor or lawyer. At one point, Von said one reason he was hesitant to use some AI tools is because he "didn't know who's going to have" his personal information. "I think that makes sense," Altman said, "to really want the privacy clarity before you use it a lot, the legal clarity."

More and more AI users are treating chatbots like their therapists, doctors or lawyers, and that's created a serious privacy problem for them. There are no confidentiality rules, and the actual mechanics of what happens to those conversations are startlingly unclear. Of course, there are other problems with using AI as a therapist or confidant, like how bots can give terrible advice or how they can reinforce stereotypes or stigma. (My colleague Nelson Aguilar has compiled a list of the 11 things you should never do with ChatGPT and why.)

Altman's clearly aware of the issues here, and seems at least a bit troubled by them. "People use it, young people especially, use it as a therapist, a life coach: I'm having these relationship problems, what should I do?" he said. "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it."

The question came up during a part of the conversation about whether there should be more rules or regulations around AI. Rules that stifle AI companies and the tech's development are unlikely to gain favor in Washington these days, as President Donald Trump's AI Action Plan released this week expressed a desire to regulate this technology less, not more. But rules to protect users might find favor.

Altman seemed most worried about a lack of legal protections for companies like his to keep them from being forced to turn over private conversations in lawsuits. OpenAI has objected to requests to retain user conversations during a lawsuit with the New York Times over copyright infringement and intellectual property issues. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

"If you go talk to ChatGPT about the most sensitive stuff and then there's a lawsuit or whatever, we could be required to produce that," Altman said. "I think that's very screwed up. I think we should have the same concept of privacy for your conversations with AI that you do with your therapist or whatever."

Be careful what you tell AI about yourself

For you, the issue isn't so much that OpenAI might have to turn your conversations over in a lawsuit. It's a question of whom you trust with your secrets. William Agnew, a researcher at Carnegie Mellon University who was part of a team that evaluated chatbots on their performance dealing with therapy-like questions, told me recently that privacy is a paramount issue when confiding in AI tools. The uncertainty around how models work, and how your conversations are kept from appearing in other people's chats, is reason enough to be hesitant.

"Even if these companies are trying to be careful with your data, these models are well known to regurgitate information," Agnew said. If ChatGPT or another tool regurgitates information from your therapy session or from medical questions you asked, that could appear if your insurance company or someone else with an interest in your personal life asks the same tool about you. "People should really think about privacy more and just know that almost everything they tell these chatbots is not private," Agnew said. "It will be used in all sorts of ways."
