
Latest news with #artificialgeneralintelligence

The future of AI: Is your business ready to lead responsibly?

Fast Company

3 hours ago

  • Business
  • Fast Company

The future of AI holds much promise, but it also comes with dangers. As with any invention, whether it becomes a benefit or a detriment depends mostly on who controls it. At Rank Secure, we have delved into AI wholeheartedly because our clients depend on us to remain on the cutting edge of marketing. Last year was trial and error until search engines started setting more rules and guidelines.

THE GOOD

We are just starting to explore the capability of AI, even though it feels like it has pushed society a lightyear into the future. Its use and capabilities remain at the baseline, with scientists and developers continuing to push the limits. Most people currently use AI for routine tasks: mundane work that requires only objective, analytical processing. The next step is to allow AI to do tasks that require conscious thought, known as artificial general intelligence (AGI).

AGI will be able to grasp data and information from a variety of sources, rather than just text. This multimodal version understands information derived from text, pictures, audio, video, graphics, and other sources. Multimodal versions will be able to understand someone's tone and facial expressions. AGI will also continue learning by teaching itself and drawing on its own experiences.

AGI has much potential. The first phase is being able to analyze data gathered from all of these sources. Eventually, we will need to build multimodal capabilities into websites. I am not ready to go there yet at my company because I feel more research and guidelines for proper use are needed.

AI'S FUTURE USE

The focus on AI and AGI usability centers on common-good systems like medicine and education. My company supports these sectors, as well as industries like e-commerce and service businesses, giving us a first look at trends, emerging tech, and new ways to use it. The key element spanning both common-good industries and consumer industries is personalization. Data is already highly personalized, and AI and AGI will pinpoint needs even more precisely than today's tools do. AI can be used to tailor a student's learning to their unique needs, and it already plays a supportive role in diagnosis and treatment planning in medicine. It will likely play a larger role over the next few decades in predicting outcomes.

So what does that have to do with my company? As a support partner for dental practices and educational providers, we need to implement personalization into their websites and create a user experience that feels like a real human interaction.

THE BAD

AI advancement comes with ethical and moral questions, as well as potential logistical and security issues. While AI and AGI may become more human-like, they'll always lack what makes us truly human—a soul. AI doesn't have an internal moral compass. And since it's created by human programmers, it carries human biases. These biases show up in what AI leaves out, and they can affect the decisions it makes. AI could even prompt decisions that prioritize societal outcomes over individual needs—for example, denying treatment to a senior citizen with cancer. So the question becomes: Can humans override AI when its decisions are bad or immoral?

Right now, we don't have clear standards in place to guide ethical AI use. That's why business leaders should set boundaries for the ethical use of AI and other new tech. My own principles come down to three questions: Does it offer truth? Does it protect privacy? And does it ensure no one is exploited?
A few guidelines we follow include:

  • Be clear about any sign-ups to use specialized tech offered on the website. If it's free, it's free: no hidden subscriptions or surprise credit card authorizations in the fine print.
  • Keep user data private. We don't sell the data we collect.
  • Keep all data secure.
  • Clearly distinguish AI-generated content from real people or sources. This is especially important in marketing.
  • Never use AI-generated images of brands or celebrities without permission.

AI DOUBLES THE REWARD—AND THE RISK

AI can make a positive contribution to society, but risk still exists. We already have some experience with always-on technology. Cameras on public street corners in many cities aid police, yet that surveillance also invades privacy. Self-driving vehicles can cut down on DUIs, but they can also be hacked. AI can expand access to mental health support, but it can't offer real companionship the way humans can. We may become more efficient, but we risk losing treasures like community, family, and human connection.

In the end, intent determines whether new tech like AI and AGI is beneficial or exploitative. It's a morally neutral tool; how we use it is what matters. At my company, our intent is to help clients make good contributions to their communities through their products and services. Others may use the same tools in more exploitative ways just to make a sale.

LOOKING AHEAD

Gen Z and Gen Alpha are best positioned to fully realize the potential of AI and AGI. They're growing up with it, and they won't carry the same fears or hesitations as older generations. These generations will ultimately shape how far AI goes and whether it's used for good or for bad. Their moral compass will define its limits.

It's time for business leaders to consider paths forward using AI and AGI. Consider including a separate mission statement for using new tech in your company's founding principles and outlining policies that support that mission. Doing so will help guide you through ever-growing changes and present a path for younger generations to follow.

Fear Of AGI Is Driving Harvard And MIT Students To Drop Out

Forbes

2 days ago

  • Science
  • Forbes

When Alice Blair enrolled in the Massachusetts Institute of Technology as a freshman in 2023, she was excited to take computer science courses and meet other people who cared about making sure artificial intelligence is developed in a way that's good for humanity. Now she's taking a permanent leave of absence, terrified that the emergence of 'artificial general intelligence,' a hypothetical AI that can perform a variety of tasks as well as people, could doom the human race.

'I was concerned I might not be alive to graduate because of AGI,' said Blair, who is from Berkeley, California. 'I think in a large majority of the scenarios, because of the way we are working towards AGI, we get human extinction.'

She's lined up a contract gig as a technical writer at the Center for AI Safety, a nonprofit focused on AI safety research, where she helps with newsletters and research papers. Blair doesn't plan to head back to MIT. 'I predict that my future lies out in the real world,' she said.

Blair's not the only student afraid of the potentially devastating impact that AI will have on the future of humanity if it becomes sentient and decides that people are more trouble than they're worth. 'Extinction-level' risk is possible given how fast AI is being developed, according to a 2024 U.S. Department of State-commissioned report. Efforts to build AI with safeguards to prevent this from happening have exploded in the last few years, both from billionaire-funded nonprofits like the Center for AI Safety and companies like Anthropic.

Many researchers disagree with that premise. 'Human extinction seems to be very very unlikely,' New York University professor emeritus Gary Marcus, who studies the intersection of psychology and AI, told Forbes. 'But working on AI safety is noble, and very little current work has provided answers.'

Now, the field of AI safety and its promise to prevent the worst effects of AI is motivating young people to drop out of school.

Physics and computer science major Adam Kaufman left Harvard University last fall to work full-time at Redwood Research, a nonprofit examining deceptive AI systems that could act against human interests. 'I'm quite worried about the risks and think that the most important thing to work on is mitigating them,' said Kaufman. 'Somewhat more selfishly, I just think it's really interesting. I work with the smartest people I've ever met on super important problems.'

He's not alone. His brother, roommate, and girlfriend have also taken leave from Harvard for similar reasons. The three of them currently work for OpenAI.

Other students are terrified of AGI, but less because it could destroy the human race and more because it could wreck their careers before they've even begun. Half of 326 Harvard students surveyed by the school's undergraduate association and AI safety club were worried about AI's impact on their job prospects. 'If your career is about to be automated by the end of the decade, then every year spent in college is one year subtracted from your short career,' said Nikola Jurković, who graduated from Harvard this May and served as the AI safety group's AGI preparedness lead. 'I personally think AGI is maybe four years away and full automation of the economy is maybe five or six years away.'
Already, some companies are hiring fewer interns and recent graduates because AI is capable of doing their tasks. Others are conducting mass layoffs. Anthropic CEO Dario Amodei has warned that AI could eliminate half of all entry-level white-collar jobs and cause unemployment to rise to 20% in the next few years. Students are terrified that this shift will dramatically accelerate when true AGI arrives, though when that might happen is up for debate.

OpenAI CEO Sam Altman thinks AGI will be developed before 2029, while Google DeepMind CEO Demis Hassabis predicts that it'll come in the next five to 10 years. Jurković believes it might arrive even sooner: He co-authored a timeline forecast for the AI Futures Project, which also predicts the ability to automate most white-collar jobs by 2027.

Others disagree. 'It is extremely unlikely that AGI will come in the next five years,' Marcus said. 'It's just marketing hype to pretend otherwise when so many core problems (like hallucinations and reasoning errors) remain unsolved.' Marcus has noted that throwing more and more data and computing power at AI models has so far failed to produce models sophisticated enough to do many of the same kinds of tasks as humans.

While questions remain about when AGI will occur and how valuable a college degree will be in a world upended by human-level artificial intelligence, students are itching to pursue their careers now, before, they worry, it's too late. That's led many to drop out to start their own companies. Since 2023, students have been leaving college to chase the AI gold rush, drawn to the success stories of generations past like Altman and Meta CEO Mark Zuckerberg. Anysphere CEO Michael Truell, now 24, and Mercor CEO Brendan Foody, 22, dropped out of MIT and Georgetown University, respectively, to pursue their startups. Anysphere was last valued at $9.9 billion, while Mercor has raised over $100 million.

With AGI threatening to completely replace human labor, some students see a ticking clock—and a huge opportunity. 'I felt that there's a limited window to act in order to have a hand on the steering wheel,' said Jared Mantell, who was studying economics and computer science at Washington University in St. Louis before dropping out to focus full-time on his startup dashCrystal, which aims to automate the design of electronics. The company has raised over $800,000 so far at a valuation of around $20 million.

Dropping out means losing out on the benefits of a college degree. According to the Pew Research Center, younger adults with a bachelor's degree or more generally make at least $20,000 more than their peers without one. And in a world where entry-level jobs are being decimated by AI, lacking a degree could limit job prospects for young people even more.

Even the cofounder of Y Combinator, a startup accelerator known for funding young founders who have dropped out, thinks students should stay in school. 'Don't drop out of college to start or work for a startup,' Paul Graham posted on X in July. 'There will be other (and probably better) startup opportunities, but you can't get your college years back.'

Blair doesn't think that dropping out of school is for everyone. 'It's very difficult and taxing to drop out of college early and get a job,' she said. 'This is something that I would only recommend to extremely resilient individuals who felt they have been adequately prepared to get a job by college already.'

DeepMind reveals Genie 3, a world model that could be the key to reaching AGI

TechCrunch

3 days ago

  • Business
  • TechCrunch

Google DeepMind has revealed Genie 3, its latest foundation world model, which the AI lab says represents a crucial stepping stone on the path to artificial general intelligence, or human-like intelligence.

'Genie 3 is the first real-time interactive general purpose world model,' Shlomi Fruchter, a research director at DeepMind, said during a press briefing. 'It goes beyond narrow world models that existed before. It's not specific to any particular environment. It can generate both photo-realistic and imaginary worlds, and everything in between.'

Genie 3, which is still in research preview and not publicly available, builds on both its predecessor Genie 2 – which can generate new environments for agents – and DeepMind's latest video generation model Veo 3 – which exhibits a deep understanding of physics.

With a simple text prompt, Genie 3 can generate multiple minutes – up from 10 to 20 seconds in Genie 2 – of diverse, interactive 3D environments at 24 frames per second with a resolution of 720p. The model also features 'promptable world events,' or the ability to use a prompt to change the generated world. Perhaps most importantly, Genie 3's simulations stay physically consistent over time because the model is able to remember what it had previously generated – an emergent capability that DeepMind researchers didn't explicitly program into the model.

Fruchter said that while Genie 3 clearly has implications for educational experiences and new generative media like gaming or prototyping creative concepts, its real unlock will manifest in training agents for general-purpose tasks, which he said is essential to reaching AGI.

'We think world models are key on the path to AGI, specifically for embodied agents, where simulating real-world scenarios is particularly challenging,' Jack Parker-Holder, a research scientist on DeepMind's open-endedness team, said during a briefing.

Genie 3 is designed to solve that bottleneck. Like Veo, it doesn't rely on a hard-coded physics engine. Instead, it teaches itself how the world works – how objects move, fall, and interact – by remembering what it has generated and reasoning over long time horizons.

'The model is auto-regressive, meaning it generates one frame at a time,' Fruchter told TechCrunch in a separate interview. 'It has to look back at what was generated before to decide what's going to happen next. That's a key part of the architecture.'
That memory creates consistency in its simulated worlds, and that consistency allows it to develop a kind of intuitive grasp of physics, similar to how humans understand that a glass teetering on the edge of a table is about to fall, or that they should duck to avoid a falling object.

This ability to simulate coherent, physically plausible environments over time makes Genie 3 much more than a generative model. It becomes an ideal training ground for general-purpose agents. Not only can it generate endless, diverse worlds to explore, but it also has the potential to push agents to their limits – forcing them to adapt, struggle, and learn from their own experience in a way that mirrors how humans learn in the real world.

Currently, the range of actions an agent can take is still limited. For example, the promptable world events allow for a wide range of environmental interventions, but they're not necessarily performed by the agent itself. Similarly, it's still difficult to accurately model complex interactions between multiple independent agents in a shared environment. And Genie 3 can only support a few minutes of continuous interaction, when hours would be necessary for proper training.

Still, Genie 3 represents a compelling step toward teaching agents to go beyond reacting to inputs so they can plan, explore, seek out uncertainty, and improve through trial and error – the kind of self-driven, embodied learning that's key to moving toward general intelligence.

'We haven't really had a Move 37 moment for embodied agents yet, where they can actually take novel actions in the real world,' Parker-Holder said, referring to the legendary moment in the 2016 game of Go between DeepMind's AI agent AlphaGo and world champion Lee Sedol, in which AlphaGo played an unconventional and brilliant move that became symbolic of AI's ability to discover new strategies beyond human understanding.

'But now, we can potentially usher in a new era,' he said.
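The autoregressive loop Fruchter describes boils down to: generate a frame, append it to a growing context, and condition the next frame on everything generated so far. The sketch below is purely illustrative, since Genie 3 is not publicly available; every name in it (WorldModel, next_frame, the action strings) is hypothetical, and a string placeholder stands in for the neural network that would actually render each frame from the stored context.

```python
# Illustrative sketch only: hypothetical stand-ins, not Genie 3's real API.

class WorldModel:
    """Toy autoregressive world model: each frame is produced by looking
    back at everything generated before it."""

    def __init__(self):
        self.context = []  # memory of prior frames keeps the world consistent

    def next_frame(self, action: str) -> str:
        # A real model would run a neural network conditioned on
        # self.context and the action; here a string stands in for a frame.
        frame = f"frame {len(self.context)}: world after {action!r}"
        self.context.append(frame)  # remember the output for future frames
        return frame

model = WorldModel()
for action in ["walk forward", "turn left", "look up"]:
    print(model.next_frame(action))
```

The point of the toy loop is the data flow, not the rendering: because each step reads the accumulated context, earlier outputs constrain later ones, which is the mechanism the article credits for the model's physical consistency.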

Microsoft Nears OpenAI Agreement for Ongoing Tech Access

Yahoo

31-07-2025

  • Business
  • Yahoo

(Bloomberg) -- Microsoft Corp. is in advanced talks to land a deal that could give it ongoing access to critical OpenAI technology, an agreement that would remove a major obstacle to the startup's efforts to become a for-profit enterprise.

The companies have discussed new terms that would let Microsoft use OpenAI's latest models and other technology even if the startup decides it has reached its goal of building a more powerful form of AI known as artificial general intelligence (AGI), according to two people familiar with the negotiations. Under the current contract, OpenAI attaining AGI is a major milestone at which point Microsoft would lose some rights to OpenAI technology.

Negotiators have been meeting regularly, and an agreement could come together in a matter of weeks, according to three people with knowledge of the situation, who requested anonymity to discuss a private matter. OpenAI Chief Executive Officer Sam Altman and Satya Nadella, his Microsoft counterpart, discussed the restructuring at the Allen & Co. conference in Sun Valley, Idaho, earlier this month, two of the people said.

While the tone of the talks has been positive, some of the people cautioned that the deal isn't finalized and could hit new roadblocks. Moreover, OpenAI's restructuring plans face other complications, including regulatory scrutiny and a lawsuit filed by Elon Musk, an early backer who split with the company and accused the startup of defrauding investors about its commitment to its charitable mission. (OpenAI has pushed back at Musk's claims and said the billionaire is trying to slow the company down.)

Negotiations over OpenAI's future as a for-profit company have dragged on for months. Microsoft, which backed OpenAI with some $13.75 billion and has the right to use its intellectual property, is the biggest holdout among the ChatGPT maker's investors, Bloomberg previously reported. At issue is the size of Microsoft's stake in the newly configured company. The talks have since broadened into a renegotiation of their relationship, with the software maker seeking to avoid suddenly losing access to the startup's technology before the end of the current deal, which expires in 2030. Microsoft and OpenAI declined to comment.

A fraying partnership

The partnership between the two companies helped inaugurate the AI age. Microsoft built the supercomputer that OpenAI used to develop the language models behind ChatGPT and, in exchange, won the right to bake the technology into its software offerings. The relationship began to fray when the OpenAI board fired (and then rehired) Altman in November 2023, an episode that shook Microsoft's faith in its partner. The rift only widened when the two companies began competing for the same customers: consumers who use their chatbots at home and corporations that have deployed the AI assistants to boost office productivity. Even as executives publicly touted their close ties, OpenAI sought to loosen its dependence on Microsoft, winning permission to build data centers and other AI infrastructure with rival companies.

OpenAI is eager to alter its complicated nonprofit structure, in part to secure additional funding to keep building data centers to power its next-generation AI models.
SoftBank Group Corp., which has said it would back OpenAI with tens of billions of dollars, has the option to reduce that outlay if OpenAI's restructuring isn't completed by the end of the year.

OpenAI wants a larger slice of the revenue currently shared with Microsoft, and has sought adjustments to Microsoft's access to its intellectual property, two of the people said. Microsoft is looking for continued access to OpenAI technology after the current contract expires in 2030.

OpenAI has a range of concerns. The startup wants to ensure its business is well-positioned with whatever share of revenue and equity Microsoft receives, in part to guarantee its nonprofit will be well-resourced with a significant stake in OpenAI, one person said. OpenAI also wants the ability to offer customers distinct products built on top of its models even if Microsoft has access to the same technology, the person said. And OpenAI wants to find a way to provide its services to more customers, including government customers, not all of which are on Azure, Microsoft's cloud computing platform, the person said. At the same time, OpenAI seeks to guarantee that Microsoft adheres to strict safety standards when deploying OpenAI's technology, especially as it gets closer to AGI, the person said.

The AGI question

Reaching agreement on what happens once OpenAI achieves artificial general intelligence has been particularly thorny. It's not clear why the language is in the contract, but it gives OpenAI a built-in way to strike out on its own just as its technology matures. The startup publicly defines AGI as 'highly autonomous systems that outperform humans at most economically valuable work.'

The existing contract has separate clauses related to that threshold, which can be triggered by technical or business milestones, according to two people familiar with the matter. OpenAI's board has the right to determine when the company has reached AGI on a technical level. Under that scenario, Microsoft would lose access to technology developed beyond that point, one of the people said. The business milestone would arrive once OpenAI has demonstrated it can reach around $100 billion in total profits for investors including Microsoft, giving it the wherewithal to repay the return Microsoft is entitled to under the existing contract, one person said. In that scenario, Microsoft would lose its rights to OpenAI technology, including products developed before that trigger, another person said. Microsoft has the right to weigh in on the business milestone, but if the two companies end up at odds over the claim, they could wind up in court, two people said. Another provision in the current contract bars Microsoft from pursuing AGI technology itself, some of the people said.

Microsoft, for its part, has demonstrated some flexibility in revised contract terms. The company agreed to waive some intellectual property rights related to OpenAI's $6.5 billion acquisition of io, the startup co-founded by iPhone designer Jony Ive, two of the people said. The software giant was less accommodating over OpenAI's proposed acquisition of AI coding startup Windsurf, the people said. That deal fell apart earlier this month, in part because of the tension with Microsoft, Bloomberg reported. Windsurf, which sells coding tools that compete with Microsoft's products, didn't want the tech giant to have access to its intellectual property, a condition that OpenAI was unsuccessful in getting Microsoft to agree to, people familiar said.
Ultimately, Windsurf's co-founders and a small group of staffers agreed to join Alphabet Inc.'s Google in a $2.4 billion deal.

In recent weeks, the companies have been negotiating Microsoft's ownership in a restructured OpenAI, with the two sides discussing an equity stake for Microsoft in the low- to mid-30% range, according to a person familiar with the matter. The Financial Times previously reported on the stake talks. But if Microsoft deems the stake and other changes to the contract insufficient, the company is willing to abandon the talks and stick with the current contract terms, another person said.

'If a deal were to be signed, it would take away a hurdle, at least from the investor's perspective,' Kash Rangan, an analyst at Goldman Sachs, said in an interview with Bloomberg Television on Tuesday. 'Both parties have so much to gain from this.'

--With assistance from Ed Ludlow.

©2025 Bloomberg L.P.

Elon Musk's xAI is already shockingly massive

Yahoo

14-07-2025

  • Business
  • Yahoo

Tesla (TSLA) CEO Elon Musk's year has felt more like a high-stakes reality show. The EV giant finally rolled out its Robotaxis, though not without a few speed bumps, while SpaceX's high-profile Starship test failures grabbed headlines. On top of that, Musk's ongoing public sparring with President Donald Trump continued to escalate.

However, amid all this chaos, xAI dropped a major update in Grok-4, shattering AI performance benchmarks in the process. Who could've guessed that AI might be the saving grace for Musk's business empire? One key major update suggests it potentially could be.

Elon Musk is no stranger to making bold bets. But with xAI, it's not just about chasing money; it's about rewriting the rules of AI. xAI's pitch isn't about more responsive chatbots or slicker search tools. It's about building artificial general intelligence, where machines can effectively think, reason, and understand like humans. That's arguably Big Tech's biggest pursuit, one potentially worth billions.

Musk says Grok, the company's powerful flagship large language model, is designed to go beyond the prompts and outputs you typically get from the likes of ChatGPT, Claude, or Google's Gemini. It's supposed to get context, read sarcasm, and understand nuance. That's a massive edge in a crowded AI market where most models feel virtually the same.

But Grok's rollout hasn't been flawless. Its recent controversies, including antisemitic content, led to some backlash and raised concerns over the ethical soundness of AI models. Ironically, Musk's vision aims to address that. He's long warned about AI's risks and insists xAI will develop safe, transparent, and human-centric models. So far, despite Tesla's stumbles and SpaceX stretching its mission goals, backers are betting Musk's AI play could turn things around for him.

Elon Musk isn't slowing down, and neither is his appetite for capital. His AI startup, xAI, is in talks to raise money at a staggering $200 billion valuation. That's up a hefty $20 billion from early last year. If the deal lands at the higher end, it would cement xAI as one of the most richly valued private tech companies. In the last two months alone, xAI has pulled in $10 billion in loans and cash, along with another $300 million from a secondary stock sale. The new round would be its third major funding move in under 60 days.

The valuation jump is happening despite the recent bumps in the road. As mentioned earlier, xAI's Grok chatbot, which is layered into Musk's social platform X, has sparked controversy for antisemitic posts. Also, the departure of X CEO Linda Yaccarino only added to the turbulence. However, investors seem to be brushing off the drama, with the AI startup's momentum continuing to impress.

In March, xAI acquired X in an all-stock deal that valued the combined firm at a whopping $113 billion. Plus, if the current fundraising push succeeds, that figure could swell past the $245 billion mark. Fueling that optimism is Musk's track record and his enviable network. Sources say Saudi Arabia's Public Investment Fund could play a major role in the new round, reinforcing xAI's global appeal. For Musk, it's another step toward turning xAI into a bellwether in the AI space.
For everyone else, it's a clear signal that his diversified tech empire will continue challenging the biggest names in Silicon Valley.
