OpenAI to buy iPhone designer Jony Ive's startup for $6.5 billion. Here's what it means for S.F.


OpenAI CEO Sam Altman and famed Apple designer Jony Ive have joined forces to build the 'iPhone of AI,' with OpenAI agreeing to buy Ive's new startup Io for $6.5 billion. The purchase would be OpenAI's biggest deal ever, uniting two of the most prominent names in tech and giving San Francisco another economic win in the nascent industry.
The team will create a 'new family of products' for artificial intelligence, Altman and Ive said in a joint statement Wednesday. The deal is subject to regulatory approval.
Ive, who left Apple in 2019, and his San Francisco design studio LoveFrom began working with Altman two years ago. Altman has been frequently seen in the city's Jackson Square neighborhood where LoveFrom's headquarters is located.
'A collaboration built upon friendship, curiosity and shared values quickly grew in ambition. Tentative ideas and explorations evolved into tangible designs,' the statement said. Io was co-founded by Scott Cannon, Evans Hankey and Tang Tan. Ive and his team 'will assume deep design and creative responsibilities across OpenAI.'
Ive declined further comment on the deal.
The startup employs 55 people, all of whom will become OpenAI employees but remain based in Jackson Square. Io's offices are close to LoveFrom's headquarters, in a building Ive purchased, the Chronicle has learned.
Ive's multimillion-dollar purchases of a string of aging, adjacent buildings centered on the 800 block of Montgomery Street in the ritzy Jackson Square neighborhood have made headlines in recent years. He has launched an extensive, ongoing renovation effort to revitalize them as LoveFrom's home base.
Altman has also made major real estate moves. He leased close to 1 million square feet across three buildings in the city's Mission Bay neighborhood, where he moved OpenAI's offices.
'Jony Ive and OpenAI's partnership is a big vote of confidence in San Francisco,' Mayor Daniel Lurie said in a social media post. 'Our city has a strong history of creativity and innovation, a legacy that is alive and well today. This is the place to create the future.'
Sarah Dennis Phillips, executive director of the city's Office of Economic and Workforce Development, said that the deal between Altman and Ive "perfectly illustrates what San Francisco is — a unique ecosystem where ambitious ideas are discussed, challenged, tested and formed into action."
"San Francisco has been Jony Ive's creative home. San Francisco birthed OpenAI and supported Sam Altman's ongoing innovation," she said. "It's where these incredible individuals, their creativity and their innovation can crash together to create something new."
The Emerson Collective, a social change organization founded by philanthropist and entrepreneur Laurene Powell Jobs, and Thrive Capital, a venture capital firm led by Joshua Kushner — the brother of President Trump's son-in-law Jared Kushner — have been confirmed as investors in Io.
Sources have confirmed to the Chronicle that Powell Jobs has also purchased real estate in Jackson Square. The Chronicle reported last year that Thrive Capital paid roughly $9 million for 451 Pacific Ave., a nearly century-old brick-and-timber office building located a block over from LoveFrom's headquarters.
"Jackson Square, where this collaboration is taking shape, has quietly become a magnet for visionary creators," said Claude Imbault, vice president of Planning and Economic Development for placemaking organization Downtown SF Partnership. "It's both the birthplace of modern San Francisco and a hub for building the future."
Sources with insight into OpenAI's acquisition of Ive's startup said that more hires are likely and that Ive's team is 'doubling down' on San Francisco.
In a video featuring Altman and Ive walking past sweeping views of San Francisco's Transamerica Pyramid and a foggy Golden Gate Bridge, the two men lauded each other, with Ive crediting Altman for 'humility' and Altman calling Ive the 'deepest thinker.' They met at Cafe Zoetrope in North Beach.
'San Francisco has been like a mythical place in American history,' said Altman. 'It is the city I most associate with the sort of leading edge of culture and technology.'
The video feels like a love letter to San Francisco.
'This city has enabled and been the place of the creation of so much,' said Ive. 'I feel I owe this city such an enormous debt of gratitude.'
'This is another success story for San Francisco. This shows that we continue to be the best place in the world to bring world-changing ideas to life,' said Supervisor Danny Sauter, who represents Jackson Square. He said that Ive and his company 'have been positive partners in the neighborhood with open and transparent communication and we hope and expect that to continue.'


Related Articles

How a Proposed Moratorium on State AI Rules Could Affect You

CNET

Congress would slam the brakes on any state rules and laws around artificial intelligence under language in the big federal spending bill now before the US Senate. Proponents say such a move would help the industry grow and compete with AI developers in China, while critics say it would limit the power of anyone but the federal government to put guardrails around a technology that is quickly becoming a major part of our lives.

The proposal says no state or political subdivision "may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems or automated decision systems" for 10 years. In May, the House of Representatives narrowly voted to approve the full budget bill, which also includes the extension of the 2017 federal tax cuts and cuts to services like Medicaid and SNAP.

AI developers and some lawmakers have said federal action is necessary to keep states from creating a patchwork of different rules and regulations across the US that could slow the technology's growth. The rapid growth in generative AI since OpenAI's ChatGPT exploded on the scene in late 2022 has led companies to fit the technology in as many spaces as possible. The economic implications are significant, as the US and China race to see which country's tech will predominate, but generative AI poses privacy, transparency and other risks for consumers that lawmakers have sought to temper.

"We need, as an industry and as a country, one clear federal standard, whatever it may be," Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers during an April hearing. "But we need one, we need clarity as to one federal standard and have preemption to prevent this outcome where you have 50 different standards." Not all AI companies are backing a moratorium, however.
In a New York Times op-ed, Anthropic CEO Dario Amodei called it "far too blunt an instrument," saying the federal government should create transparency standards for AI companies instead. "Having this national transparency standard would help not only the public but also Congress understand how the technology is developing, so that lawmakers can decide whether further government action is needed."

Efforts to limit the ability of states to regulate artificial intelligence could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life. "There have been a lot of discussions at the state level, and I would think that it's important for us to approach this problem at multiple levels," said Anjana Susarla, a professor at Michigan State University who studies AI. "We could approach it at the national level. We can approach it at the state level too. I think we need both."

Several states have already started regulating AI

The proposed language would bar states from enforcing any regulation, including those already on the books. The exceptions are rules and laws that make things easier for AI development and those that apply the same standards to non-AI models and systems that do similar things. These kinds of regulations are already starting to pop up. The biggest focus is not in the US, but in Europe, where the European Union has already implemented standards for AI. But states are starting to get in on the action.

Colorado passed a set of consumer protections last year, set to go into effect in 2026. California adopted more than a dozen AI-related laws last year. Other states have laws and regulations that often deal with specific issues such as deepfakes or require AI developers to publish information about their training data. At the local level, some regulations also address potential employment discrimination if AI systems are used in hiring.
"States are all over the map when it comes to what they want to regulate in AI," said Arsen Kourinian, a partner at the law firm Mayer Brown. So far in 2025, state lawmakers have introduced at least 550 proposals around AI, according to the National Conference of State Legislatures. In the House committee hearing last month, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. "We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead," he said. While some states have laws on the books, not all of them have gone into effect or seen any enforcement. That limits the potential short-term impact of a moratorium, said Cobun Zweifel-Keegan, managing director in Washington for the International Association of Privacy Professionals. "There isn't really any enforcement yet." A moratorium would likely deter state legislators and policymakers from developing and proposing new regulations, Zweifel-Keegan said. "The federal government would become the primary and potentially sole regulator around AI systems," he said. What a moratorium on state AI regulation means AI developers have asked for any guardrails placed on their work to be consistent and streamlined. During a Senate Commerce Committee hearing last week, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory system "would be disastrous" for the industry. Altman suggested instead that the industry develop its own standards. Asked by Sen. Brian Schatz, a Democrat from Hawaii, if industry self-regulation is enough at the moment, Altman said he thought some guardrails would be good, but "It's easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences." 
(Disclosure: Ziff Davis, parent company of CNET, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Concerns from companies, both the developers that create AI systems and the "deployers" who use them in interactions with consumers, often stem from fears that states will mandate significant work such as impact assessments or transparency notices before a product is released, Kourinian said. Consumer advocates have said more regulations are needed, and hampering the ability of states could hurt the privacy and safety of users.

"AI is being used widely to make decisions about people's lives without transparency, accountability or recourse -- it's also facilitating chilling fraud, impersonation and surveillance," Ben Winters, director of AI and privacy at the Consumer Federation of America, said in a statement. "A 10-year pause would lead to more discrimination, more deception and less control -- simply put, it's siding with tech companies over the people they impact."

A moratorium on specific state rules and laws could result in more consumer protection issues being dealt with in court or by state attorneys general, Kourinian said. Existing laws around unfair and deceptive practices that are not specific to AI would still apply. "Time will tell how judges will interpret those issues," he said.

Susarla said the pervasiveness of AI across industries means states might be able to regulate issues like privacy and transparency more broadly, without focusing on the technology. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits. "It has to be some kind of balance between 'we don't want to stop innovation,' but on the other hand, we also need to recognize that there can be real consequences," she said. Much policy around the governance of AI systems does happen because of those so-called technology-agnostic rules and laws, Zweifel-Keegan said.
"It's worth also remembering that there are a lot of existing laws and there is a potential to make new laws that don't trigger the moratorium but do apply to AI systems as long as they apply to other systems," he said. A proposed 10-year moratorium on state AI laws is now in the hands of the Senate, where the Senate Commerce, Science and Transportation Committee has already held hearings on artificial intelligence. Nathan Howard/Bloomberg via Getty Images The AI debate moves to the Senate With the bill now in the hands of the US Senate -- and with more people becoming aware of the proposal -- debate over the moratorium has picked up. Senators of both parties, including Republican Sens. Josh Hawley and Marsha Blackburn, have voiced their concerns. In the Senate, the measure could be stripped out of the budget because of the so-called Byrd Rule, which prohibits anything that is not a budgetary issue from being included in a reconciliation bill. Whatever bill the Senate approves will then also have to be accepted by the House, where it passed by the narrowest of margins. Even some House members who voted for the bill have said they don't like the moratorium, namely Rep. Marjorie Taylor Greene, a key ally of President Trump. The Georgia Republican posted on X this week that she is "adamantly OPPOSED" to the moratorium and that she would not vote for the bill with the moratorium included. At the state level, a letter signed by 40 state attorneys general -- of both parties -- called for Congress to reject the moratorium and instead create that broader regulatory system. "This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI," they wrote.

Microsoft's stock hits fresh record, rallying despite drop in broader market

CNBC

On a down day for the market, Microsoft reached a record high for the first time in 11 months. Shares of the software giant rose 0.8% to close at $467.68.

Microsoft has once again reclaimed the title of world's largest company by market cap, with a valuation of $3.48 trillion. Nvidia has a market cap of $3.42 trillion, and Apple is valued at $3 trillion. Microsoft's previous record close came in July 2024. The stock is now up 11% for the year, while the Nasdaq is flat.

Tech stocks broadly dropped on Thursday, led by a plunge in Tesla, as CEO Elon Musk and President Donald Trump escalated their public feud. Musk, who was leading the Trump administration's Department of Government Efficiency (DOGE) until last week, has slammed the Trump-backed spending bill making its way through Congress, a spat that has turned personal. But Microsoft investors appear to be tuning out that noise.

Microsoft CEO Satya Nadella focused on his company's tight relationship with artificial intelligence startup OpenAI in an interview with Bloomberg, portions of which were published on Thursday. "Why would any one of us want to go upset that?" he told Bloomberg. Nadella told analysts in January that OpenAI had made a large new commitment to Microsoft's Azure cloud. In total, Microsoft has invested nearly $14 billion in OpenAI.

Anthropic co-founder on cutting access to Windsurf: 'It would be odd for us to sell Claude to OpenAI'

Yahoo

Anthropic co-founder and Chief Science Officer Jared Kaplan said his company cut Windsurf's direct access to Anthropic's Claude AI models largely because of rumors and reports that OpenAI, its largest competitor, is acquiring the AI coding assistant. "We really are just trying to enable our customers who are going to sustainably be working with us in the future," Kaplan said during an onstage interview Thursday with TechCrunch at TC Sessions: AI 2025. "I think it would be odd for us to be selling Claude to OpenAI."

The comment comes just a few weeks after Bloomberg reported that OpenAI was acquiring Windsurf for $3 billion. Earlier this week, Windsurf said that Anthropic cut its direct access to Claude 3.5 Sonnet and Claude 3.7 Sonnet, two of the more popular AI models for coding, forcing the startup to find third-party computing providers on relatively short notice. Windsurf said it was disappointed in Anthropic's decision and that it might cause short-term instability for users trying to access Claude via Windsurf. Windsurf declined to comment on Kaplan's remarks, and an OpenAI spokesperson did not immediately respond to TechCrunch's request. The companies have not confirmed the acquisition rumors.

Part of the reason Anthropic cut Windsurf's access to Claude, according to Kaplan, is that the company is quite computing-constrained today. Anthropic would like to reserve its computing for what Kaplan characterized as "lasting partnerships." However, Kaplan said the company hopes to greatly increase the availability of models it can offer users and developers in the coming months. He added that Anthropic has just started to unlock capacity on a new computing cluster from its partner, Amazon, which he says is "really big and continues to scale."

As Anthropic pulls away from Windsurf, Kaplan said he's collaborating with other customers building AI coding tools, such as Cursor -- a company Kaplan said Anthropic expects to work with for a long time.
Kaplan rejected the idea that Anthropic was in competition with companies like Cursor, which is developing its own AI models. Meanwhile, Kaplan says Anthropic is increasingly focused on developing its own agentic coding products, such as Claude Code, rather than AI chatbot experiences. While companies like OpenAI, Google, and Meta are competing for the most popular AI chatbot platform, Kaplan said the chatbot paradigm was limiting due to its static nature, and that AI agents would in the long run be much more helpful for users.
