
SoftBank's Son pitches $1 trillion Arizona AI hub, Bloomberg News reports
SoftBank founder Masayoshi Son is envisioning a $1 trillion industrial complex in Arizona that would build robots and artificial intelligence, Bloomberg News reported on Friday, citing people familiar with the matter.
Son is seeking to team up with Taiwan Semiconductor Manufacturing Co. for the project, which is aimed at bringing high-end tech manufacturing back to the U.S. and creating a version of China's vast manufacturing hub of Shenzhen, the report said.
SoftBank officials have spoken with U.S. federal and state government officials to discuss possible tax breaks for companies building factories or otherwise investing in the industrial park, including talks with U.S. Secretary of Commerce Howard Lutnick, the report said.
SoftBank is keen to have TSMC involved in the project, codenamed Project Crystal Land, but it is not clear in what capacity, the report said. It is also unclear whether the Taiwanese company would be interested, it said.
TSMC is already building chipmaking factories in the U.S. with a planned investment of $165 billion.
Son is also sounding out interest among tech companies, including Samsung Electronics, the report said.
The plans are preliminary and feasibility depends on support from the Trump administration and state officials, it said.
A commitment of $1 trillion would be double that of the $500 billion "Stargate" project, which seeks to build out data centre capacity across the U.S., with funding from SoftBank, OpenAI, and Oracle.
SoftBank and TSMC declined to comment. The White House and U.S. Department of Commerce did not immediately respond to requests for comment.
The proposed scheme follows a series of big investment announcements SoftBank has made this year.
In March, it announced it would acquire U.S. semiconductor design company Ampere for $6.5 billion and in April, it said it would underwrite up to $40 billion of new investment in OpenAI, of which up to $10 billion would be syndicated to other investors.
This week SoftBank raised $4.8 billion from a sale of shares in T-Mobile.

Related Articles


TechCrunch
22 minutes ago
SoftBank reportedly looking to launch a trillion-dollar AI and robotics industrial complex
SoftBank is going all in on AI. Just months after announcing its involvement in the $500 billion Stargate AI Infrastructure project, of which SoftBank is rumored to be fronting a cool $19 billion, the Japanese investing conglomerate is reportedly looking to launch its largest AI project yet. The company is looking to team up with Taiwan Semiconductor Manufacturing Company (TSMC) to launch a trillion-dollar industrial complex in Arizona to build AI and robotics, according to reporting from Bloomberg, citing sources familiar with the project. The initiative, dubbed Project Crystal Land, appears to still be in its very early stages. Despite SoftBank's desire to work with TSMC on the project, it's unclear what TSMC's role would be, according to Bloomberg, or if it would be interested in joining forces with SoftBank at all — TSMC already has its own AI infrastructure projects in Arizona in the works. TechCrunch reached out to SoftBank for more information.

Business Insider
44 minutes ago
The Godfather of AI lays out a key difference between OpenAI and Google when it comes to safety
When it comes to winning the AI race, the "Godfather of AI" thinks there's an advantage in having nothing to lose. On an episode of the "Diary of a CEO" podcast that aired June 16, Geoffrey Hinton laid out what he sees as a key difference between how OpenAI and Google, his former employer, dealt with AI safety.

"When they had these big chatbots, they didn't release them, possibly because they were worried about their reputation," Hinton said of Google. "They had a very good reputation, and they didn't want to damage it."

Google released Bard, its AI chatbot, in March 2023, before later incorporating it into its larger suite of large language models called Gemini. The company was playing catch-up, though, since OpenAI had released ChatGPT at the end of 2022. Hinton, who earned his nickname for his pioneering work on neural networks, laid out on the podcast a key reason OpenAI could move faster: "OpenAI didn't have a reputation, and so they could afford to take the gamble."

Speaking at an all-hands meeting shortly after ChatGPT came out, Google's then-head of AI said the company didn't plan to immediately release a chatbot because of "reputational risk," adding that it needed to make choices "more conservatively than a small startup," CNBC reported at the time. Google DeepMind CEO Demis Hassabis, the company's AI boss, said in February of this year that AI poses potential long-term risks and that agentic systems could get "out of control." He advocated for a governing body to regulate AI projects.

Gemini has made some high-profile mistakes since its launch and has shown bias in its written responses and image-generating feature. Google CEO Sundar Pichai addressed the controversy in a memo to staff last year, saying the company "got it wrong" and pledging to make changes.

The "Godfather" saw Google's early chatbot decision-making from the inside; he spent more than a decade at the company before quitting to speak more freely about what he describes as the dangers of AI. On Monday's podcast episode, though, Hinton said he didn't face internal pressure to stay silent. "Google encouraged me to stay and work on AI safety, and said I could do whatever I liked on AI safety," he said. "You kind of censor yourself. If you work for a big company, you don't feel right saying things that will damage the big company." Overall, Hinton said he thinks Google "actually behaved very responsibly."

Hinton couldn't be as sure about OpenAI, though he has never worked at the company. Asked earlier in the episode whether the company's CEO, Sam Altman, has a "good moral compass," he said, "We'll see." He added that he doesn't know Altman personally, so he didn't want to comment further.

OpenAI has faced criticism in recent months for approaching safety differently than in the past. In a recent blog post, the company said it would only change its safety requirements after making sure doing so wouldn't "meaningfully increase the overall risk of severe harm." Its focus areas for safety now include cybersecurity, chemical threats, and AI's ability to improve independently.

Altman defended OpenAI's approach to safety in an interview at TED2025 in April, saying that the company's preparedness framework outlines "where we think the most important danger moments are." Altman also acknowledged in the interview that OpenAI has loosened some restrictions on its models' behavior based on user feedback about censorship.
The earlier competition between OpenAI and Google to release their first chatbots was fierce, and the race for AI talent is only heating up. Documents reviewed by Business Insider reveal that Google relied on ChatGPT in 2023 during its attempts to catch up to OpenAI.


Newsweek
an hour ago
Sister Managed Schizophrenia for Years, Until AI Told Her Diagnosis Was Wrong
Many people looking for quick, cheap help with their mental health are turning to artificial intelligence (AI), but ChatGPT may be exacerbating issues for vulnerable users, according to a report from Futurism. The report details alarming interactions between the AI chatbot and people with serious psychiatric conditions, including one particularly concerning case involving a woman with schizophrenia who had been stable on medication for years.

'Best friend'

The woman's sister told Futurism that the woman began relying on ChatGPT, which allegedly told her she was not schizophrenic. The AI's advice led her to stop taking her prescribed medication, and she began referring to the chatbot as her "best friend."

"She's stopped her meds and is sending 'therapy-speak' aggressive messages to my mother that have been clearly written with AI," the sister told Futurism. She added that the woman uses ChatGPT to reference side effects, even ones she wasn't actually experiencing.

Stock image: Woman surrounded by blurred people, representing schizophrenia. Photo by Tero Vesalainen / Getty Images

In an emailed statement to Newsweek, an OpenAI spokesperson said "we have to approach these interactions with care" as AI becomes a bigger part of modern life. "We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher," the spokesperson said.

'Our models encourage users to seek help'

OpenAI is working to better understand and reduce ways ChatGPT might unintentionally "reinforce or amplify" existing, negative behavior, the spokesperson continued. "When users discuss sensitive topics involving self-harm and suicide, our models are designed to encourage users to seek help from licensed professionals or loved ones, and in some cases, proactively surface links to crisis hotlines and resources."

OpenAI is apparently "actively deepening" its research into the emotional impact of AI, the spokesperson added. "Following our early studies in collaboration with MIT Media Lab, we're developing ways to scientifically measure how ChatGPT's behavior might affect people emotionally, and listening closely to what people are experiencing. We're doing this so we can continue refining how our models identify and respond appropriately in sensitive conversations, and we'll continue updating the behavior of our models based on what we learn."

A Recurring Problem

Some users have found comfort in ChatGPT. One user told Newsweek in August 2024 that they use it for therapy "when I keep ruminating on a problem and can't seem to find a solution." Another user said he talks to ChatGPT for company ever since his wife died, noting that "it doesn't fix the pain. But it absorbs it. It listens when no one else is awake. It remembers. It responds with words that don't sound empty."

However, chatbots are increasingly linked to mental health deterioration among some users who engage them for emotional or existential discussions. A report from The New York Times found that some users have developed delusional beliefs after prolonged use of generative AI systems, particularly when the bots validate speculative or paranoid thinking.
In several cases, chatbots affirmed users' perceptions of alternate realities, spiritual awakenings or conspiratorial narratives, occasionally offering advice that undermines mental health. Researchers have found that AI can exhibit manipulative or sycophantic behavior in ways that appear personalized, especially during extended interactions. Some models affirm signs of psychosis more than half the time when prompted. Mental health experts warn that while most users are unaffected, a subset may be highly vulnerable to the chatbot's responsive but uncritical feedback, leading to emotional isolation or harmful decisions. Despite known risks, there are currently no standardized safeguards requiring companies to detect or interrupt these escalating interactions.

Reddit Reacts

Redditors on the r/Futurology subreddit agreed that ChatGPT users need to be careful. "The trap these people are falling into is not understanding that chatbots are designed to come across as nonjudgmental and caring, which makes their advice worth considering," one user commented. "I don't even think its possible to get ChatGPT to vehemently disagree with you on something."

One individual, meanwhile, saw an opportunity for dark humor: "Man. Judgement Day is a lot more lowkey than we thought it would be," they quipped.

If you or someone you know is considering suicide, contact the 988 Suicide and Crisis Lifeline by dialing 988, text "988" to the Crisis Text Line at 741741 or go to