
Japan's SoftBank Posts $7.8 Bn Annual Net Profit
Japanese tech investor SoftBank Group, a major player in the US Stargate artificial intelligence drive, on Tuesday posted a bumper full-year net profit of $7.8 billion.
The 1.15-trillion-yen net profit for the 12 months to March 2025 reversed a net loss of 227 billion yen in the previous financial year.
The company's earnings often swing dramatically because it invests heavily in tech start-ups and semiconductor firms, whose share prices are volatile.
Tuesday's result marked its first full-year net profit since the 2020-21 financial year.
SoftBank Group has been actively investing in AI in recent years under its flamboyant founder and CEO Masayoshi Son.
He has repeatedly said "artificial superintelligence" will arrive in a decade -- bringing new inventions, medicine, knowledge and ways to invest.
The company is leading the $500 billion Stargate project to build AI infrastructure in the United States along with cloud giant Oracle and ChatGPT-maker OpenAI.
SoftBank Group and OpenAI announced in February that the Japanese giant would spend $3 billion annually to deploy OpenAI's technologies across its group companies.
SoftBank also said in March it had reached a deal to acquire US semiconductor firm Ampere for $6.5 billion, reinforcing its aggressive push into AI. The purchase is expected to close in the second half of the year.
The Japanese company is a majority shareholder in Arm Holdings, whose technology is used in 99 percent of smartphones.
Hideki Yasuda, an analyst at brokerage Toyo Securities, told AFP ahead of the announcement that he expected the firm to reveal strong figures.
"The market was not bad from January to March, so I think (the results) will land relatively well," he said.
"The market environment only worsened from the end of March to the beginning of April when the tariffs were announced," he said, referring to US President Donald Trump's multi-pronged free trade war.
"When AI first came out, it was mostly a dream and we couldn't do anything with it," Yasuda said. "But as of 2025, we are at the stage where it will be put to practical use, so the company is now investing in various businesses that are using AI."
Related Articles


Int'l Business Times
10 hours ago
Apple Boosts On-Device AI With Live Translation, Screen Smarts, and Dev Access
During Apple's annual WWDC 2025 conference live stream Monday, the company outlined the next phase of Apple Intelligence, adding real-time translation, expanded visual recognition tools, and on-device AI for developers. The tools work across the entire ecosystem, Apple said, including iPhone, iPad, Mac, Apple Watch, and Vision Pro, and will launch with the fall software updates. This marks a shift from talking about consumer-facing AI features toward deeper, system-level integration. Apple Intelligence will power Live Translation in calls and messages, identify (and interact with) content on screen, and push further into the Image Playground and Genmoji apps. For developers, Apple will open access to its foundational on-device models, making it easier to provide private, AI-enhanced experiences across apps without having to send data to the cloud. This echoes Apple's long-standing stance on privacy and hardware, which contrasts with more cloud-based AI providers like OpenAI and Google.

"Last year, we took the first steps on a journey to bring users intelligence that's helpful, relevant, easy to use, and right where users need it, all while protecting their privacy. Now, the models that power Apple Intelligence are becoming more capable and efficient, and we're integrating features in even more places across each of our operating systems," said Craig Federighi, Apple's senior vice president of Software Engineering, during the keynote. "We're also taking the huge step of giving developers direct access to the on-device foundation model powering Apple Intelligence, allowing them to tap into intelligence that is powerful, fast, built with privacy, and available even when users are offline. We think this will ignite a whole new wave of intelligent experiences in the apps users rely on every day. We can't wait to see what developers create."

On-Device Live Translation
A new Apple Intelligence feature enables real-time translation in Messages, letting users chat across languages without leaving the app.
Apple promises that Apple Intelligence will come to eight more languages by the end of the year, including Danish, Dutch, Norwegian, Portuguese (Portugal), Swedish, Turkish, Chinese (traditional), and Vietnamese. This will no doubt help the Live Translation feature, which will be integrated into the Messages, FaceTime, and Phone apps and is likewise enabled by Apple's models running on device. Your conversations are yours, the company promises, and even Apple cannot read them. Messages will translate your text on the fly, letting you communicate with speakers of other languages in real time. In FaceTime, there will be translated live captions to read along with what your conversation partner is saying, while the Phone app will turn those captions into audio translations during the whole conversation.

Genmoji and Image Playground Updates
Image Playground, powered by Apple Intelligence, lets users generate styled visuals on-device, including oil painting effects, using prompts and preset suggestions.
If you're into messing around with AI image generation, the Genmoji and Image Playground improvements should be right up your alley. In addition to creating your own Genmoji with text descriptions, you can now mix emoji together and combine them with descriptions. You'll also be able to change expressions and details like hairstyles in Image Playground and Genmoji when you base your creations on your family and friends.

Image Playground gets more ChatGPT integrations, letting you create images in new styles like oil painting or vector art with a tap or a text description. This, of course, is not processed on-device; Apple will send the images and descriptions to ChatGPT, which will then return the image to you. You'll have to opt in for that to happen.

Visual Intelligence Sees Your Screen
Finally, you can have Apple Intelligence interact with the things you see on your screen. This new feature will let you take a screenshot and ask ChatGPT questions about what you're seeing. In true corporate style, the demo showed someone taking a screenshot of a jacket they then asked to buy, and highlighting a lamp they also wanted to buy. Whether this function will do more than find you places to buy things remains to be seen. One interesting use case, though, is the ability to take a screenshot of an event poster and have Visual Intelligence add it to your calendar. That sounds pretty great, to be honest. Add that concert to my calendar, Apple!

Workout Buddy Brings Apple Intelligence to Apple Watch
A user runs down a flight of outdoor stairs, tracking fitness progress with Apple Watch's Move ring feature and Workout Buddy.
Fitness fan? Workout Buddy on Apple Watch will take your fitness and workout data and generate "personalized, motivational insights" during your session. Apple wrote: "To offer meaningful inspiration in real time, Workout Buddy analyzes data from a user's current workout along with their fitness history, based on data like heart rate, pace, distance, Activity rings, personal fitness milestones, and more. A new text-to-speech model then translates insights into a dynamic generative voice built using voice data from Fitness+ trainers, so it has the right energy, style, and tone for a workout. Workout Buddy processes this data privately and securely with Apple Intelligence." Workout Buddy needs an Apple Watch, Bluetooth headphones, and an Apple Intelligence-capable iPhone nearby. Originally published on Tech Times


Int'l Business Times
12 hours ago
Social Security Official Wants More AI Use in the Agency Despite Some Reports of Bots Being Unhelpful
The Social Security Administration's new chief, Frank Bisignano, says he's doubling down on the use of artificial intelligence and tech tools to streamline services, despite concerns from some users and employees about reliability and privacy. Bisignano, who took office on May 7, wants to overhaul the agency's customer service model by working closely with Elon Musk's Department of Government Efficiency (DOGE), which has sent engineers to assist with technology upgrades, the Wall Street Journal reported.

This comes just months after DOGE triggered backlash by slashing Social Security jobs and attempting to close field offices, prompting widespread public outcry and a legal challenge that temporarily restricted its access to agency data. On Friday, that order was lifted by the Supreme Court.

Today, Bisignano says the focus is on making the agency "digital-first," cutting wait times and encouraging more Americans to use self-service tools, like online Social Security card replacement. The administration aims to reduce the average wait time on its 1-800 number from 19.2 minutes to 12 by September 2026 and to process all retirement and Medicare claims on time. AI now handles nearly half of customer calls, up from 17% last October, though some users report getting stuck in response loops.

Next, the agency plans to expand AI's use in field offices and improve fraud detection, targeting the overpayments that cost the program nearly $8.9 billion in 2023. Bisignano says AI can help prevent waste while freeing up staff to focus on more complex cases. Originally published on Latin Times


DW
15 hours ago
AI pioneer Geoffrey Hinton urges tougher government rules
May 22, 2025
Trailblazing British-Canadian computer scientist Geoffrey Hinton says AI poses a myriad of challenges. Hinton, whose pioneering work earned him the title "Godfather of AI," would like to see effective and strict government regulation of AI.