Musk's xAI in talks to raise $4.3 billion in equity funding
The xAI logo on a smartphone arranged in New York.
Image: Gabby Jones/Bloomberg
Elon Musk's artificial intelligence startup xAI is in talks to raise $4.3 billion (R77bn) through an equity investment, on top of the $5bn it has recently been trying to borrow from debt investors, according to information the company shared with investors, who asked not to be identified because the details are private.
Musk's company, which develops the AI chatbot Grok, needs the new money in part because it has already spent most of what it previously raised, the materials shared with investors indicate.
Between its founding in 2023 and when the debt sale was launched this year, xAI raised $14bn via equity fundraising, according to the materials. But as of March 31, only $4bn of that was left on the company's balance sheet, the information showed.
The new equity infusion is luring investors back into xAI's debt offering, which was being circulated just as Musk's public spat with President Donald Trump was playing out. The company has also offered changes to the debt documents to assuage investors' concerns, people familiar with the matter said. The changes make it more difficult for xAI to shift assets, which protects lenders' collateral, and also set a ceiling on the amount of secured debt it can raise, the people added.
Commitments on the $5bn debt sale are due Tuesday, according to a different person with knowledge of the matter. In addition to the fresh funding, xAI may also get a $650 million rebate from one of its manufacturers that will help the firm cut costs, people familiar with the matter said.
A spokesperson for the company declined to comment, as did a spokesperson for Morgan Stanley, the bank in charge of xAI's debt sale.
Huge fundraising rounds have become a standard feature of the fiercely competitive artificial intelligence industry, in which the top players are jockeying to secure the expensive computer chips and infrastructure needed to train advanced AI models like Grok and ChatGPT.
Despite the big spending, potential xAI investors have been told that the company's valuation grew to $80bn at the end of the first quarter, up from $51bn at the end of 2024. Investors in previous rounds have included Andreessen Horowitz, Sequoia and VY Capital.
Musk recently decided to merge xAI with his social media company X, but the new funds will go toward the AI operations.
BLOOMBERG
