
SoftBank Group swings to profit in first quarter on market enthusiasm for tech stocks
The result will likely be welcomed by investors as SoftBank pursues its biggest spending spree since the launch of its Vision Funds in 2017 and 2019, this time making mammoth investments in artificial intelligence companies.
Investors and analysts are awaiting updates on how these investments will be financed, the timeline for returns to materialise and whether assets will be sold to fund the new projects.
Favourable market conditions for technology companies over the quarter lifted the valuations of SoftBank's array of startup stakes, potentially creating monetisation opportunities whose proceeds could be directed toward newer investments.
SoftBank's Vision Funds hold $45 billion worth of stakes in late-stage companies ready to list soon, up from $36 billion at the end of March.
The group overall logged a net profit of 421.8 billion yen ($2.87 billion) for the April-June quarter.
The result compared with a net loss of 174.3 billion yen for the same period a year earlier and the 127.6 billion yen average profit from three analyst estimates compiled by LSEG.
The Vision Fund unit posted an investment gain of 726.8 billion yen ($4.94 billion), around half of which derived from a surge in the share price of South Korean e-commerce firm Coupang (CPNG.N) over the quarter.
SoftBank is leading a $40 billion funding round for ChatGPT maker OpenAI. It has until the end of the year to fund its $22.5 billion portion; the remainder has already been subscribed, according to a source familiar with the matter.
It is also leading the financing for the Stargate project, a $500 billion scheme to develop data centres in the United States, part of an effort to position itself as what founder Masayoshi Son in June called the "organiser of the industry".
SoftBank has yet to release details on what kinds of returns its financing of the Stargate project could generate. The extent of third-party investment will determine what other financing tools, such as bank loans and debt issuance, it may have to deploy.
In July, SoftBank raised $4.8 billion by selling off a portion of its holding in T-Mobile (TMUS.O).
However, SoftBank's record of exiting investments and distributing profits has been patchy of late. The Vision Funds have made a cumulative investment gain of just $5 billion on total committed capital of $172.2 billion as of the end of June.
($1 = 147.0600 yen)
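The arithmetic behind the article's dollar figures is a straight conversion at that stated rate; a minimal sketch in Python, with the last line checking the Vision Funds' cumulative gain against committed capital:

```python
# Converting the article's yen figures at the stated rate of $1 = 147.06 yen.
RATE = 147.06  # yen per US dollar

print(f"Net profit:       ${421.8 / RATE:.2f}bn")  # 421.8bn yen -> ~$2.87bn
print(f"Vision Fund gain: ${726.8 / RATE:.2f}bn")  # 726.8bn yen -> ~$4.94bn

# Cumulative Vision Fund gain relative to committed capital, as of end-June
print(f"Cumulative return: {5 / 172.2:.1%}")       # $5bn / $172.2bn -> ~2.9%
```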

Related Articles


Daily Mail
CEO of AI company gets bloodied pig's head in horror package as he's called a 'Clark Kent knockoff'
The CEO of an AI startup in Las Vegas received a package containing a severed pig's head and a threatening letter, believed to stem from his company's use of artificial intelligence.

Blake Owens, founder and CEO of Agrippa, an AI-powered platform that connects commercial real estate investors and developers without traditional brokers, received the bloodied pig's head along with the menacing note on July 29. The gruesome parcel was sent to a relative's home, and the message criticized Owens' use of AI, with personal insults that called him a 'Clark Kent knockoff'. It ended ominously: 'And don't get greedy because pigs get fat and hogs get slaughtered.'

Owens told KLAS: 'Perhaps this person watched too much of The Godfather. Needless to say, I still take it very seriously, but don't feel like I'm being truly threatened. It was a message.'

The note was signed only with the initial 'M' and appeared to be motivated by a June TV segment that profiled Owens and Agrippa's AI tool, known as 'Marcus', which automates real estate transactions by matching developers with investors and evaluating property bids.

The sinister letter also said: 'AI is not going to replace brokers. Clearly you don't understand real estate wasn't built by developers or investors. And it sure as hell wasn't built by tech guys in Lululemon. It was built by brokers. We did it the hard way. No shortcuts, no tech, just people.'

Owens said he believed the sender was fearful of being displaced by automation: 'I understand this person is probably just frustrated that business isn't going well for them, and then they see AI replacement stories on top of that. And I just so happen to be someone they can focus their frustration on.'

A photo of the package showed the sender labeled as 'Marcus Agrippa', a reference to the company's AI system. Owens joked: 'Is this a message that you know your own AI is turning against you? I wasn't quite sure how to interpret it.'

Las Vegas PD confirmed it was investigating the incident and classified it as a harassment case. A suspect had yet to be identified. Owens said he did not feel 'genuinely threatened' and would not press charges should the sender eventually be identified. He told KLAS: 'I don't want to punch down on this person; they may be in a tough spot in life. I do see this as an opportunity to show people you don't become a better person by making another man a lesser person.'

Owens also addressed anxiety surrounding AI's growing presence in the workforce, particularly in fields such as real estate that have historically relied on personal relationships. He said: 'You know, people are scared. They feel displaced and when disruption moves faster than education, fear just fills the gap.'

Owens added that Agrippa was not designed to replace humans but to empower professionals through AI. He said: '[Winston Churchill] said to be perfect is to change often. I think a lot of people are afraid of change and what's coming with AI, because it really is a tsunami of change that people are trying to resist. But the more you embrace it, the better you'll do, the more skills that you'll accumulate, more value you'll bring to the table.'
Despite the threatening nature of the package, Owens remained committed to encouraging dialogue and told Inman: 'If I knew who this person was, I'd say, "Hey, feel free to reach out to me - maybe not with a package, just send me an email - I'm happy to share whatever education I can on keeping up with AI."' The investigation into the incident remained ongoing.


The Guardian
OpenAI will not disclose GPT-5's energy use. It could be higher than that of past models
In mid-2023, if a user asked OpenAI's ChatGPT for a recipe for artichoke pasta or instructions on how to make a ritual offering to the ancient Canaanite deity Moloch, its response might have taken – very roughly – 2 watt-hours, or about as much electricity as an incandescent bulb consumes in 2 minutes. OpenAI released a model on Thursday that will underpin the popular chatbot – GPT-5. Ask that version of the AI for an artichoke recipe, and the same amount of pasta-related text could take several times – even 20 times – that amount of energy, experts say.

As it rolled out GPT-5, the company highlighted the model's breakthrough capabilities: its ability to create websites, answer PhD-level science questions, and reason through difficult problems. But experts who have spent the past years working to benchmark the energy and resource usage of AI models say those new powers come at a cost: a response from GPT-5 may take significantly more energy than a response from previous versions of ChatGPT.

OpenAI, like most of its competitors, has released no official information on the power usage of its models since GPT-3, which came out in 2020. Sam Altman, its CEO, tossed out some numbers on ChatGPT's resource consumption on his blog this June. However, these figures – 0.34 watt-hours and 0.000085 gallons of water per query – do not refer to a specific model and have no supporting documentation.

'A more complex model like GPT-5 consumes more power both during training and during inference. It's also targeted at long thinking … I can safely say that it's going to consume a lot more power than GPT-4,' said Rakesh Kumar, a professor at the University of Illinois who is currently working on the energy consumption of computation and AI models.

The day GPT-5 was released, researchers at the University of Rhode Island's AI lab found that the model can use up to 40 watt-hours of electricity to generate a medium-length response of about 1,000 tokens, which are the building blocks of text for an AI model and are approximately equivalent to words. A dashboard they put up on Friday indicates GPT-5's average energy consumption for a medium-length response is just over 18 watt-hours, a figure higher than all other models they benchmark except OpenAI's o3 reasoning model, released in April, and R1, made by the Chinese AI firm Deepseek. This is 'significantly more energy than GPT-4o', the previous model from OpenAI, said Nidhal Jegham, a researcher in the group. Eighteen watt-hours would correspond to burning that incandescent bulb for 18 minutes. Given recent reports that ChatGPT handles 2.5bn requests a day, the total consumption of GPT-5 could reach the daily electricity demand of 1.5m US homes.

As large as these numbers are, researchers in the field say they align with their broad expectations for GPT-5's energy consumption, given that GPT-5 is believed to be several times larger than OpenAI's previous models. OpenAI has not released the parameter counts – which determine a model's size – for any of its models since GPT-3, which had 175bn parameters. A disclosure this summer from the French AI company Mistral found a 'strong correlation' between a model's size and its energy consumption, based on Mistral's study of its in-house systems.

'Based on the model size, the amount of resources [used by GPT-5] should be orders of magnitude higher than that for GPT-3,' said Shaolei Ren, a professor at the University of California, Riverside who studies the resource footprint of AI.
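The comparisons above are straightforward arithmetic. A minimal sketch in Python, assuming a 60 W bulb (consistent with the article's earlier 2 watt-hours ≈ 2 bulb-minutes equivalence) and roughly 30 kWh a day for an average US household; both constants are assumptions of mine, not the researchers' figures:

```python
# Back-of-envelope scaling of the URI dashboard's 18 Wh-per-response figure.
# BULB_WATTS and HOME_KWH_PER_DAY are assumed values, not from the study.
WH_PER_QUERY = 18        # average medium-length GPT-5 response (URI dashboard)
QUERIES_PER_DAY = 2.5e9  # reported daily ChatGPT request volume
BULB_WATTS = 60          # assumed incandescent bulb rating
HOME_KWH_PER_DAY = 30    # assumed average US household daily consumption

bulb_minutes = WH_PER_QUERY / BULB_WATTS * 60             # 18 minutes
total_gwh_per_day = WH_PER_QUERY * QUERIES_PER_DAY / 1e9  # 45 GWh
homes_equivalent = (WH_PER_QUERY * QUERIES_PER_DAY) / (HOME_KWH_PER_DAY * 1e3)

print(f"{bulb_minutes:.0f} bulb-minutes per response")
print(f"{total_gwh_per_day:.0f} GWh per day in total")
print(f"equivalent to {homes_equivalent / 1e6:.1f}m US homes")  # ~1.5m
```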
GPT-4 was widely believed to be 10 times the size of GPT-3. Jegham, Kumar, Ren and others say that GPT-5 is likely to be significantly larger than GPT-4. Leading AI companies like OpenAI believe that extremely large models may be necessary to achieve AGI – that is, an AI system capable of doing humans' jobs. Altman has argued strongly for this view, writing in February: 'It appears that you can spend arbitrary amounts of money and get continuous and predictable gains,' though he said GPT-5 did not surpass human intelligence.

In its benchmarking study in July, which looked at the power consumption, water usage and carbon emissions of Mistral's Le Chat bot, the startup found a one-to-one relationship between a model's size and its resource consumption, writing: 'A model 10 times bigger will generate impacts one order of magnitude larger than a smaller model for the same amount of generated tokens.'

Jegham, Kumar and Ren said that while GPT-5's scale is significant, other factors will probably come into play in determining its resource consumption. GPT-5 is deployed on more efficient hardware than some previous models, and it appears to use a 'mixture-of-experts' architecture, meaning that not all of its parameters are activated when responding to a query – a construction that will likely cut its energy consumption.

On the other hand, GPT-5 is also a reasoning model, and works in video and images as well as text, which likely makes its energy footprint far greater than text-only operations, both Ren and Kumar say – especially as reasoning mode means the model computes for longer before responding to a query. 'If you use the reasoning mode, the amount of resources you spend for getting the same answer will likely be several times higher, five to 10,' said Ren.

To calculate an AI model's resource consumption, the group at the University of Rhode Island multiplied the average time that model takes to respond to a query – be it for a pasta recipe or an offering to Moloch – by the model's average power draw during its operation.

Estimating a model's power draw was 'a lot of work', said Abdeltawab Hendawi, a professor of data science at the University of Rhode Island. The group struggled to find information on how different models are deployed within data centres. Their final paper contains estimates for which chips are used for a given model, and how different queries are parcelled out between different chips in a data centre.

Altman's June blog post confirmed their findings: the figure he gave for ChatGPT's energy consumption per query, 0.34 watt-hours, closely matches what the group found for GPT-4o.

Hendawi, Jegham and others in the group said their findings underscored the need for more transparency from AI companies as they release ever-larger models. 'It's more critical than ever to address AI's true environmental cost,' said Marwan Abdelatti, a professor at URI. 'We call on OpenAI and other developers to use this moment to commit to full transparency by publicly disclosing GPT-5's environmental impact.'
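The estimation method the URI group describes reduces to a single multiplication: energy per query is average power draw times average response time. A minimal sketch of that arithmetic; the example response time and power figure below are hypothetical placeholders, not the group's measurements:

```python
# The URI group's method in miniature: per-query energy (Wh) is the model's
# average power draw (W) multiplied by its average response time (hours).
# The inputs below are hypothetical placeholders, not measured values.

def wh_per_query(avg_response_seconds: float, avg_power_watts: float) -> float:
    """Energy per query in watt-hours: power (W) x time (h)."""
    return avg_power_watts * (avg_response_seconds / 3600)

# e.g. a model averaging 20 s per response on hardware drawing ~3.2 kW
print(f"{wh_per_query(20, 3200):.1f} Wh per query")  # ~17.8 Wh
```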


Telegraph
Labour develops AI to predict parliamentary rebellions
Labour is developing a computer model to predict future rebellions after Sir Keir Starmer was humbled by his own backbenchers over welfare reforms. The Department for Science, Innovation and Technology is funding an artificial intelligence (AI) programme to scour debate records for signs of MPs who will vote against the Government. The move comes amid concern that the Prime Minister is being pushed around by his party following a series of mutinies.

Parlex is billed as software that can 'forecast parliamentary reaction' by analysing records of past debates in the Commons. It will allow civil servants to draw up dossiers for Cabinet ministers saying which MPs, including those in their own ranks, are likely to oppose specific policies.

A project page on the government website said: 'By analysing years of parliamentary debate contributions from MPs and Peers, Parlex offers insights into how Parliament might react to a new policy if it were debated tomorrow. This tool helps policy professionals gauge parliamentary sentiment towards specific issues, determining whether a policy will be well-received or face significant opposition. This allows policy teams to understand the political climate and anticipate potential challenges or support for a policy before it is formally proposed and to build a parliamentary handling strategy.'

Sir Keir faced the biggest crisis of his premiership in July when more than 120 Labour MPs threatened to revolt against changes to sickness benefits. The Prime Minister was eventually forced to abandon the reforms, which would have saved £5bn a year, after a significant blow to his authority.

It was not the first time that he had been humbled by a backbench rebellion. In 2023, while he was leader of the opposition, he was defied by 56 of his MPs who broke the party whip to vote for a ceasefire in Gaza. This month, Sir Keir announced he planned to recognise a Palestinian state after again coming under heavy pressure from backbenchers and the Cabinet. With many Labour MPs sitting on wafer-thin majorities and fearing defeat at the next election, there are expectations that party discipline could break down further.

The science ministry announced that it was developing Parlex earlier this year as part of a new suite of AI tools known as Humphrey. It has been named after Sir Humphrey Appleby, the permanent secretary at the 'Department of Administrative Affairs' in the 1980s TV satire Yes, Minister. Ministers said that the system was 'still in early-stage user testing' but had already cut the amount of time it took officials to research an MP.
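Parlex's internals have not been published. Purely as an illustration of the kind of analysis the project page describes, scoring MPs' past debate contributions for sentiment on a policy area, here is a naive sketch; the sample records, word lists and decision rule are all hypothetical:

```python
# Illustrative only: Parlex's actual design is not public. A naive sketch of
# scoring MPs' past debate contributions on a topic to flag likely opposition.
from collections import defaultdict

OPPOSE_WORDS = {"oppose", "against", "cruel", "unacceptable"}   # hypothetical
SUPPORT_WORDS = {"support", "welcome", "necessary", "fair"}     # hypothetical

debates = [  # (MP, past contribution mentioning the policy area) - made up
    ("MP A", "I will oppose these cruel cuts to sickness benefits."),
    ("MP A", "These changes are unacceptable to my constituents."),
    ("MP B", "I welcome reforms that put the system on a fair footing."),
]

scores = defaultdict(int)
for mp, text in debates:
    words = set(text.lower().strip(".").split())
    scores[mp] += sum(w in OPPOSE_WORDS for w in words)   # opposition signals
    scores[mp] -= sum(w in SUPPORT_WORDS for w in words)  # support signals

for mp, score in scores.items():
    print(mp, "likely to oppose" if score > 0 else "unlikely to oppose")
```

A production system would presumably use far richer language models over Hansard records, but the pipeline shape (past contributions in, per-MP sentiment out) matches what the project page describes.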