Latest news with #Altman

Business Insider
6 hours ago
- Business
- Business Insider
Meta's chief AI scientist says all countries should contribute data to a shared open-source AI model
AI has surged to the top of the diplomatic agenda in the past couple of years. And one of the leading topics of discussion among researchers, tech executives, and policymakers is how open-source models — which are free for anyone to use and modify — should be governed. At the AI Action Summit in Paris earlier this year, Meta's chief AI scientist, Yann LeCun, said he'd like to see a world in which "we'll train our open-source platforms in a distributed fashion with data centers spread across the world." Each will have access to its own data sources, which may be kept confidential, but "they will contribute to a common model that will essentially constitute a repository of all human knowledge," he said. This repository will be larger than what any one entity, whether a country or company, can handle. India, for example, may not give away a body of knowledge comprising all the languages and dialects spoken there to a tech company. However, "they would be happy to contribute to training a big model, if they can, that is open source," he said. To achieve that vision, though, "countries have to be really careful with regulations and legislation." He said countries shouldn't impede open-source AI but should favor it. Even for closed-loop systems, OpenAI CEO Sam Altman has said international regulation is critical. "I think there will come a time in the not-so-distant future, like we're not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm," Altman said on the All-In podcast last year. Altman believes those systems will have a "negative impact way beyond the realm of one country" and said he wanted to see them regulated by "an international agency looking at the most powerful systems and ensuring reasonable safety testing."


Atlantic
2 days ago
- Business
- Atlantic
OpenAI Can Stop Pretending
OpenAI is a strange company for strange times. Valued at $300 billion—roughly the same as seven Fords or one and a half PepsiCos—the AI start-up has an era-defining product in ChatGPT and is racing to be the first to build superintelligent machines. The company is also, to the apparent frustration of its CEO Sam Altman, beholden to its nonprofit status. When OpenAI was founded in 2015, it was meant to be a research lab that would work toward the goal of AI that is 'safe' and 'benefits all of humanity.' There wasn't supposed to be any pressure—or desire, really—to make money. Later, in 2019, OpenAI created a for-profit subsidiary to better attract investors—the types of people who might otherwise turn to the less scrupulous corporations that dot Silicon Valley. But even then, that part of the organization was under the nonprofit side's control. At the time, it had released no consumer products and capped how much money its investors could make. Then came ChatGPT. OpenAI's leadership had intended for the bot to provide insight into how people would use AI without any particular hope for widespread adoption. But ChatGPT became a hit, kicking 'off a growth curve like nothing we have ever seen,' as Altman wrote in an essay this past January. The product was so alluring that the entire tech industry seemed to pivot overnight into an AI arms race. Now, two and a half years since the chatbot's release, Altman says some half a billion people use the program each week, and he is chasing that success with new features and products—for shopping, coding, health care, finance, and seemingly any other industry imaginable. OpenAI is behaving like a typical business, because its rivals are typical businesses, and massive ones at that: Google and Meta, among others. Now 2015 feels like a very long time ago, and the charitable origins have turned into a ball and chain for OpenAI. Last December, after facing concerns from potential investors that pouring money into the company wouldn't pay off because of the nonprofit mission and complicated governance structure, the organization announced plans to change that: OpenAI was seeking to transition to a for-profit. The company argued that this was necessary to meet the tremendous costs of building advanced AI models. A nonprofit arm would still exist, though it would separately pursue 'charitable initiatives'—and it would not have any say over the actions of the for-profit, which would convert into a public-benefit corporation, or PBC. Corporate backers appeared satisfied: In March, the Japanese firm SoftBank conditioned billions of dollars in investments on OpenAI changing its structure. Resistance came as swiftly as the new funding. Elon Musk—a co-founder of OpenAI who has since created his own rival firm, xAI, and seems to take every opportunity to undermine Altman—wrote on X that OpenAI 'was funded as an open source, nonprofit, but has become a closed source, profit-maximizer.' He had already sued the company for abandoning its founding mission in favor of financial gain, and claimed that the December proposal was further proof. Many unlikely allies emerged soon after. Attorneys general in multiple states, nonprofit groups, former OpenAI employees, outside AI experts, economists, lawyers, and three Nobel laureates all have raised concerns about the pivot, even petitioning to submit briefs to Musk's lawsuit. OpenAI backtracked, announcing a new plan earlier this month that would have the nonprofit remain in charge. 
Steve Sharpe, a spokesperson for OpenAI, told me over email that the new proposed structure 'puts us on the best path to' build a technology 'that could become one of the most powerful and beneficial tools in human history.' (The Atlantic entered into a corporate partnership with OpenAI in 2024.) Yet OpenAI's pursuit of industry-wide dominance shows no real signs of having hit a roadblock. The company has a close relationship with the Trump administration and is leading perhaps the biggest AI infrastructure buildout in history. Just this month, OpenAI announced a partnership with the United Arab Emirates and an expansion into personal gadgets—a forthcoming 'family of devices' developed with Jony Ive, former chief design officer at Apple. For-profit or not, the future of AI still appears to be very much in Altman's hands. Why all the worry about corporate structure anyway? Governance, boardroom processes, legal arcana—these things are not what sci-fi dreams are made of. Yet those concerned with the societal dangers that generative AI, and thus OpenAI, pose feel these matters are of profound importance. The still more powerful artificial 'general' intelligence, or AGI, that OpenAI and its competitors are chasing could theoretically cause mass unemployment, worsen the spread of misinformation, and violate all sorts of privacy laws. In the highest-flung doomsday scenarios, the technology brings about civilizational collapse. Altman has expressed these concerns himself—and so OpenAI's 2019 structure, which gave the nonprofit final say over the for-profit's actions, was meant to guide the company toward building the technology responsibly instead of rushing to release new AI products, sell subscriptions, and stay ahead of competitors. 'OpenAI's nonprofit mission, together with the legal structures committing it to that mission, were a big part of my decision to join and remain at the company,' Jacob Hilton, a former OpenAI employee who contributed to ChatGPT, among other projects, told me. In April, Hilton and a number of his former colleagues, represented by the Harvard law professor Lawrence Lessig, wrote a letter to the court hearing Musk's lawsuit, arguing that a large part of OpenAI's success depended on its commitment to safety and the benefit of humanity. To renege on, or at least minimize, that mission was a betrayal. The concerns extend well beyond former employees. Geoffrey Hinton, a computer scientist at the University of Toronto who last year received a Nobel Prize for his AI research, told me that OpenAI's original structure would better help 'prevent a super intelligent AI from ever wanting to take over.' Hinton is one of the Nobel laureates who has publicly opposed the tech company's for-profit shift, alongside the economists Joseph Stiglitz and Oliver Hart. The three academics, joining a number of influential lawyers, economists, and AI experts, in addition to several former OpenAI employees, including Hilton, signed an open letter in April urging the attorneys general in Delaware and California—where the company's nonprofit was incorporated and where the company is headquartered, respectively—to closely investigate the December proposal. According to its most recent tax filing, OpenAI is intended to build AGI 'that safely benefits humanity, unconstrained by a need to generate financial return,' so disempowering the nonprofit seemed, to the signatories, self-evidently contradictory. 
In its initial proposal to transition to a for-profit, OpenAI still would have had some accountability as a public-benefit corporation: A PBC legally has to try to make profits for shareholders alongside pursuing a designated 'public benefit' (in this case, building 'safe' and 'beneficial' AI as outlined in OpenAI's founding mission). In its December announcement, OpenAI described the restructure as 'the next step in our mission.' But Michael Dorff, another signatory to the open letter and a law professor at UCLA who studies public-benefit corporations, explained to me that PBCs aren't necessarily an effective way to bring about public good. 'They are not great enforcement tools,' he said—they can 'nudge' a company toward a given cause but do not give regulators much authority over that commitment. (Anthropic and xAI, two of OpenAI's main competitors, are also public-benefit corporations.) OpenAI's proposed conversion also raised a whole other issue—a precedent for taking resources accrued under charitable intentions and repurposing them for profitable pursuits. And so yet another coalition, composed of nonprofits and advocacy groups, wrote its own petition for OpenAI's plans to be investigated, with the aim of preventing charitable organizations from being leveraged for financial gain in the future. Regulators, it turned out, were already watching. Three days after OpenAI's December announcement of the plans to revoke nonprofit oversight, Kathy Jennings, the attorney general of Delaware, notified the court presiding over Musk's lawsuit that her office was reviewing the proposed restructure to ensure that the corporation was fulfilling its charitable interest to build AI that benefits all of humanity. California's attorney general, Rob Bonta, was reviewing the restructure, as well. This ultimately led OpenAI to change plans. 'We made the decision for the nonprofit to stay in control after hearing from civic leaders and having discussions with the offices of the Attorneys General of California and Delaware,' Altman wrote in a letter to OpenAI employees earlier this month. The for-profit, meanwhile, will still transition to a PBC. The new plan is not yet a done deal: The offices of the attorneys general told me that they are reviewing the new proposal. Microsoft, OpenAI's closest corporate partner, has not yet agreed to the new structure. One could be forgiven for wondering what all the drama is for. Amid tension over OpenAI's corporate structure, the organization's corporate development hasn't so much as flinched. In just the past few weeks, the company has announced a new CEO of applications, someone to directly oversee and expand business operations; OpenAI for Countries, an initiative focused on building AI infrastructure around the world; and Codex, a powerful AI 'agent' that does coding tasks. To OpenAI, these endeavors legitimately contribute to benefiting humanity: building more and more useful AI tools; bringing those tools and the necessary infrastructure to run them to people around the world; drastically increasing the productivity of software engineers. No matter OpenAI's ultimate aims, in a race against Google and Meta, some commercial moves are necessary to stay ahead. And enriching OpenAI's investors and improving people's lives are not necessarily mutually exclusive. The greater issue is this: There is no universal definition for 'safe' or 'beneficial' AI. 
A chatbot might help doctors process paperwork faster and help a student float through high school without learning a thing; an AI research assistant could help climate scientists arrive at novel insights while also consuming huge amounts of water and fossil fuels. Whatever definition OpenAI applies will be largely determined by its board. Altman, in his May letter to employees, contended that OpenAI is on the best path 'to continue to make rapid, safe progress and to put great AI in the hands of everyone.' But everyone, in this case, has to trust OpenAI's definition of safe progress. The nonprofit has not always been the most effective check on the company. In 2023, the nonprofit board—which then and now had 'control' over the for-profit subsidiary—removed Altman from his position as CEO. But the company's employees revolted, and he was reinstated shortly thereafter with the support of Microsoft. In other words, 'control' on paper does not always amount to much in reality. Sharpe, the OpenAI spokesperson, said the nonprofit will be able to appoint and remove directors to OpenAI's separate for-profit board, but declined to clarify whether its board will be able to remove executives (such as the CEO). The company is 'continuing to work through the specific governance mandate in consultation with relevant stakeholders,' he said. Sharpe also told me that OpenAI will remove the cap on shareholder returns, which he said will satisfy the conditions for SoftBank's billions of dollars in investment. A top SoftBank executive has said 'nothing has really changed' with OpenAI's restructure, despite the nonprofit retaining control. If investors are now satisfied, the underlying legal structure is irrelevant. Marc Toberoff, a lawyer representing Musk in his lawsuit against OpenAI, wrote in a statement that 'SoftBank pulled back the curtain on OpenAI's corporate theater and said the quiet part out loud. OpenAI's recent 'restructuring' proposal is nothing but window dressing.' Lessig, the lawyer who represented the former OpenAI employees, told me that 'it's outrageous that we are allowing the development of this potentially catastrophic technology with nobody at any level doing any effective oversight of it.' Two years ago, Altman, in Senate testimony, seemed to agree with that notion: He told lawmakers that 'regulatory intervention by governments will be critical to mitigate the risks' of powerful AI. But earlier this month, only a few days after writing to his employees and investors that 'as AI accelerates, our commitment to safety grows stronger,' he told the Senate something else: Too much regulation would be 'disastrous' for America's AI industry. Perhaps—but it might also be in the best interests of humanity.
Yahoo
2 days ago
- Business
- Yahoo
Two of Silicon Valley's Most Prestigious Venture Capital Firms Just Plowed $135 Million Into Sam Altman's Crypto Project. What Do They See That Wall Street Doesn't?
Sam Altman of OpenAI is also the co-founder of an identity management project called World Network. World Network is looking to use biometric hardware to help combat the rise of artificial intelligence (AI) bots online. As part of the project, World Network also employs a transactional layer known as Worldcoin -- a cryptocurrency available to investors. The first half of 2025 has been anything but predictable in the stock market. It seems like ages ago that concerns around Chinese start-up DeepSeek and the intensifying competitive landscape in artificial intelligence (AI) sent shockwaves across the capital markets. As investors know all too well, just about the only things economists have been talking about for almost two months now are tariffs. In the wake of all this uncertainty in the equity market, some astute investors have turned elsewhere. Specifically, the cryptocurrency market has started to attract some new life. Although Bitcoin seems poised to reach new all-time highs, venture capital (VC) juggernauts Andreessen Horowitz (A16Z) and Bain Capital just jointly plowed $135 million into a relatively new token called Worldcoin (CRYPTO: WLD) -- an initiative that was co-founded by Sam Altman, the chief executive officer of OpenAI. Let's explore what Worldcoin is and assess why two of Silicon Valley's most prestigious VC firms just invested in the fledgling crypto project during this period of pronounced market uncertainty. Worldcoin is a cryptocurrency token that runs on the Ethereum blockchain. It serves as the transactional foundation for a project known as World Network. To put it simply, Altman and his co-founders are seeking to build a community (the World Network) that helps solve a use case known as "proof of human." Essentially, by becoming a member of the World Network, each user is then granted a unique identifier known as their custom World ID. In an effort to combat the same user signing up twice or having the network compromised by artificial intelligence (AI) bots, World IDs are verified using a technology called iris biometrics (eye scanning). Before joining The Motley Fool, I spent a decade working in investment banking as well as leading various financial projects at technology start-ups. Anecdotally speaking, I've seen VC investors become intrigued by ideas that may come across as off the beaten path. I see World Network as no different. On the surface, World Network parallels something out of an Orwellian novel. But digging a little deeper into its purpose, I can understand the need for solving the proof of human (PoH) use case. In its initial whitepaper, World Network outlines certain functions (such as voting online) that are currently difficult to perfect due to potential breaches of security protocols. To me, there is an opportunity here for World Network to touch a number of important addressable markets, such as digital identification, financial fraud, cybersecurity, and how each of these stitches together through the broader AI fabric. There are some reports suggesting that interest in World IDs is high. Hence, there may have been a need for a capital infusion to fund the resources required to fulfill demand. With that said, I can't help but think the recent $135 million purchase of Worldcoin by A16Z and Bain Capital is little more than a form of hedging. Said a different way, investing in the stock market right now is likely unattractive for institutional investors. 
Since the venture realm is inherently risky in the first place, I think it makes more sense for funds to double down on existing portfolio opportunities, as opposed to allocating any dry powder to new start-ups with little to no traction or volatile stocks. Broadly speaking, the cryptocurrency landscape is incredibly volatile. Even established cryptocurrencies such as Bitcoin often display meme-like behavior as investors chase momentum for no tangible reason. This can make the price action of cryptocurrencies highly unpredictable. Following news of the investment by A16Z and Bain Capital, Worldcoin rocketed by roughly 40%. Although these gains have retreated a bit, the crypto is still trading at levels that suggest some momentum is still holding (or HODLing). One big concern I have about investing in Worldcoin is how the legal and regulatory environment will affect World Network's ability to scale in the long run. While Altman's prominent position in the broader AI landscape could serve as an asset, I still see the project as more of an uphill battle right now. Although I understand why VCs are intrigued with Worldcoin, the majority of investors don't have the same financial flexibility as these funds. Ultimately, I think investing in Worldcoin is highly speculative (even for crypto!), and I would pass on the opportunity right now. Before you buy stock in Worldcoin, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Worldcoin wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $653,389!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $830,492!* Now, it's worth noting Stock Advisor's total average return is 982% — a market-crushing outperformance compared to 171% for the S&P 500. Don't miss out on the latest top 10 list, available when you join. See the 10 stocks » *Stock Advisor returns as of May 19, 2025 Adam Spatacco has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Bitcoin and Ethereum. The Motley Fool has a disclosure policy. Two of Silicon Valley's Most Prestigious Venture Capital Firms Just Plowed $135 Million Into Sam Altman's Crypto Project. What Do They See That Wall Street Doesn't? was originally published by The Motley Fool


India Today
3 days ago
- Business
- India Today
Elon Musk wanted pause in OpenAI deal in Middle East, but his efforts failed
Elon Musk reportedly tried to block OpenAI from securing a high-profile artificial intelligence infrastructure deal in the Middle East unless his own AI company, xAI, was included — but his efforts failed. The deal in question, which was announced last week, involves OpenAI and several US tech companies partnering with G42, an Emirati AI company, to build a massive data centre complex in Abu Dhabi. The facility, which will be part of a larger initiative called Stargate UAE (under Project Stargate, announced in January this year), is expected to become one of the world's largest AI computing hubs. These hubs have servers powered by advanced chips from companies like Nvidia and operated by firms such as Oracle and Cisco. According to a report by The Wall Street Journal, Musk had caught wind of the OpenAI deal shortly before US President Donald Trump's tour of the Gulf earlier this month. Learning that OpenAI CEO Sam Altman would be part of the trip and that a deal in the UAE was being finalised, Musk reportedly became irate and demanded his firm xAI be included in the deal. In a call with officials from G42, a firm chaired by the UAE national security adviser Sheikh Tahnoon bin Zayed, the brother of the country's president, Musk allegedly warned that Trump would not approve any such deal unless xAI had a seat at the table. Despite the pressure, the deal moved forward without Musk's involvement. White House officials reviewed the agreement following Musk's intervention but ultimately allowed it to proceed as planned. One official described Musk's concern as being about 'fairness for all AI companies,' though others noted his long-running rivalry with Altman may have played a role. Musk and Altman co-founded OpenAI in 2015. However, in 2018, Musk parted ways with the company. There was apparently a power struggle, allegedly caused by a proposal by Musk to merge OpenAI and Tesla. The idea, according to Walter Isaacson's biography of Elon Musk, was rejected by Altman and other board members. After that, Musk walked away, and Altman took the reins as CEO. Since then, Musk and Altman have constantly been in a back and forth. Musk has also sued OpenAI and Altman for betraying the original mission of the company – which was to build open-source AI models with no profit motive. Some years ago, Musk launched his own AI venture, xAI, which has struggled to match OpenAI's success despite raising $6 billion in funding and securing partnerships in the Gulf region, including a recent tunnel-building deal between his Boring Company and Dubai. Under the Stargate UAE project, G42 and its partners will reportedly fund construction of a massive data complex in Abu Dhabi while committing an equivalent investment to US-based AI facilities. The first phase of this arrangement will reportedly go live next year. Musk's annoyance over the OpenAI and UAE deal mirrors his earlier frustration when Trump and Altman unveiled the original Stargate project in the US earlier this year. The project is an ambitious $500 billion AI investment initiative, which is also backed by Oracle, SoftBank and Nvidia. At the time, Musk had criticised the deal's backers and questioned their financial credibility on social media.