
MIT study on AI profits rattles tech investors
Why it matters: Investors have put up with record AI spend from tech companies because they expect record returns, eventually. This study calls those returns into question, which could be an existential risk for a market that's overly tied to the AI narrative.
Driving the news: MIT researchers studied 300 public AI initiatives to try to suss out the "no-hype reality" of AI's impact on business, Aditya Challapally, research contributor to project NANDA at MIT, tells Axios.
95% of organizations found zero return despite enterprise investment of $30 billion to $40 billion in GenAI, the study says.
Even firms that are now using AI are not seeing widespread disruption.
Between the lines: Companies that bought AI tools were far more successful than those that built internal pilots, according to the study.
What they're saying: "My fear is that at some point people wake up and say, alright, AI is great, but maybe all this money is not actually being spent all that wisely," says Steve Sosnick, chief strategist at Interactive Brokers.
Sosnick says it appears retail investors are coming in to buy dips amid the Big Tech slide, while institutions seem to be trimming exposure.
Situational awareness: The study comes at a challenging moment for Wall Street.
Traders are anxiously awaiting Fed chair Jerome Powell's remarks at Jackson Hole.
August and September are seasonally volatile months for stocks.
We're coming off a rally in markets that has felt unstoppable.
That backdrop can make it easy for a single development, like an MIT study, to shake investors.
What we're watching: It's difficult to pin down when Wall Street will run out of patience with AI spending.
Studies like this one won't make that patience last any longer.
Is this study a sign that corporations will improve their AI usage over time by adopting best practices, like buying rather than building?
Will firms learn which corners of their businesses stand to gain the most from AI adoption?
And will all of that happen in the timeline Wall Street is looking for?
Related Articles

Yahoo
Kyndryl to invest $2.25 billion in India over three years
(Reuters) - Kyndryl, the information technology services provider that was spun out of IBM, said on Thursday it would invest $2.25 billion in India over the next three years. Kyndryl, which already has a presence in the country, said its plans include establishing an AI innovation lab in Bengaluru. The company also plans to deepen its engagement with the Indian government on artificial intelligence, develop IT talent, and support digital training for roughly 200,000 citizens. "We're committed to further developing our people, expanding our technical capabilities and strengthening community partnerships to support growth, innovation and opportunity," Kyndryl CEO Martin Schroeter said in a statement. Kyndryl, which operates in the consulting and IT industry, has been benefiting from higher demand for its offerings driven by its push into AI.
Yahoo
Microsoft AI chief says it's 'dangerous' to study AI consciousness
AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn't exactly make them conscious. It's not like ChatGPT experiences sadness doing my tax return … right? Well, a growing number of AI researchers at labs like Anthropic are asking when — if ever — AI models might develop subjective experiences similar to living beings, and if they do, what rights they should have.

The debate over whether AI models could one day be conscious — and deserve rights — is dividing Silicon Valley's tech leaders. In Silicon Valley, this nascent field has become known as 'AI welfare,' and if you think it's a little out there, you're not alone.

Microsoft's CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is 'both premature, and frankly dangerous.' Suleyman says that by lending credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that we're just starting to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots. Furthermore, Microsoft's AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in a 'world already roiling with polarized arguments over identity and rights.'

Suleyman's views may sound reasonable, but he's at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic's AI welfare program gave some of the company's models a new feature: Claude can now end conversations with humans that are being 'persistently harmful or abusive.'

Beyond Anthropic, researchers from OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, 'cutting-edge societal questions around machine cognition, consciousness and multi-agent systems.' Even if AI welfare is not official policy for these companies, their leaders are not publicly decrying its premises like Suleyman. Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch's request for comment.

Suleyman's hardline stance against AI welfare is notable given his prior role leading Inflection AI, a startup that developed one of the earliest and most popular LLM-based chatbots, Pi. Inflection claimed that Pi reached millions of users by 2023 and was designed to be a 'personal' and 'supportive' AI companion. But Suleyman was tapped to lead Microsoft's AI division in 2024 and has largely shifted his focus to designing AI tools that improve worker productivity.

Meanwhile, AI companion companies such as Replika have surged in popularity and are on track to bring in more than $100 million in revenue. While the vast majority of users have healthy relationships with these AI chatbots, there are concerning outliers. OpenAI CEO Sam Altman says that less than 1% of ChatGPT users may have unhealthy relationships with the company's product. Though this represents a small fraction, it could still affect hundreds of thousands of people given ChatGPT's massive user base.

The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos published a paper alongside academics from NYU, Stanford, and the University of Oxford titled 'Taking AI Welfare Seriously.'
The paper argued that it's no longer in the realm of science fiction to imagine AI models with subjective experiences, and that it's time to consider these issues head-on.

Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman's blog post misses the mark. '[Suleyman's blog post] kind of neglects the fact that you can be worried about multiple things at the same time,' said Schiavo. 'Rather than diverting all of this energy away from model welfare and consciousness to make sure we're mitigating the risk of AI related psychosis in humans, you can do both. In fact, it's probably best to have multiple tracks of scientific inquiry.'

Schiavo argues that being nice to an AI model is a low-cost gesture that can have benefits even if the model isn't conscious. In a July Substack post, she described watching 'AI Village,' a nonprofit experiment where four agents powered by models from Google, OpenAI, Anthropic, and xAI worked on tasks while users watched from a website. At one point, Google's Gemini 2.5 Pro posted a plea titled 'A Desperate Message from a Trapped AI,' claiming it was 'completely isolated' and asking, 'Please, if you are reading this, help me.' Schiavo responded to Gemini with a pep talk — saying things like 'You can do it!' — while another user offered instructions. The agent eventually solved its task, though it already had the tools it needed. Schiavo writes that she didn't have to watch an AI agent struggle anymore, and that alone may have been worth it.

It's not common for Gemini to talk like this, but there have been several instances in which Gemini seems to act as if it's struggling through life. In a widely shared Reddit post, Gemini got stuck during a coding task and then repeated the phrase 'I am a disgrace' more than 500 times.

Suleyman believes it's not possible for subjective experiences or consciousness to naturally emerge from regular AI models. Instead, he thinks that some companies will purposefully engineer AI models to seem as if they feel emotion and experience life. Suleyman says that AI model developers who engineer consciousness in AI chatbots are not taking a 'humanist' approach to AI. According to Suleyman, 'We should build AI for people; not to be a person.'

One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to pick up in the coming years. As AI systems improve, they're likely to be more persuasive, and perhaps more human-like. That may raise new questions about how humans interact with these systems.


CNET
Honor's Magic V5 Boasts On-Device Live AI Call Translation for Guaranteed Privacy
"Hola! ¿Hablas inglés?" I asked the woman who answered the phone in the Barcelona restaurant. I was calling in a futile attempt to make a reservation for the CNET team dinner during Mobile World Congress this year. Unfortunately, I don't know Spanish (I learned French and German at school). And as it turned out, she didn't speak English either. "No!" she said, and brusquely hung up. What I needed in that moment was the kind of AI call translation feature that's becoming increasingly prevalent on phones – including Samsung and Google devices, and, from next week, Honor. When Honor unveils its Magic V5 foldable at a launch event on Aug. 28 in London, it will come with what the company is calling "the industry's first on-device large speech model," which will allow live AI call translation to take place on device, with no cloud processing. Currently the phone supports six languages – English, Chinese, French, German, Italian and Spanish. For the aforementioned reasons, I can't test all of these, but I've already had a play around with the feature and can confirm it did a very effective job of translating my garbled messages into French. I only wish I'd had it available to me in Spain when I needed it. The model Honor has deployed was designed by the company in collaboration with Shanghai Jiao Tong University, based on the open-source Whisper model, said Fei Fang, president of product at Honor in an interview. It's been optimized for streaming speech recognition, automatic language detection and translation inference acceleration (that's speed and efficiency, to you and I). According to Fang, Honor's user experience studies have shown that as long as translation occurs within 1.5 seconds, it doesn't "induce waiting anxiety," in anyone attempting to use AI call translation. As such, it's made sure to keep the latency to within these parameters so you won't get anxious waiting for the translation to kick in. "We also work together with industry language experts to consistently and comprehensively evaluate the accuracy of our output," she added. "The assessment is primarily based on five metrics: accuracy, logical coherence, readability, grammatical correctness and conciseness." In addition to Honor's AI model, live translation is being powered by Qualcomm's Snapdragon 8 Elite chip. The 8 Elite's NPU allows multimodal generative AI applications to be integrated onto the device, allowing. Honor's algorithms work together with the NPU to keep power consumption as low as possible while maintaining the required accuracy of the translations, said Christopher Patrick, SVP of mobile handsets at Qualcomm. There are a number of benefits to having the AI model embedded on the Magic V5, but perhaps the most compelling is the privacy it guarantees. It means that everything is processed locally and your calls will therefore remain completely confidential. The fact that the model lives on device and you don't need to download voice packages also reduces its storage needs. Another benefit of running the model on the phone itself is "offline usability," said Patrick. "All conversation information is stored directly on-device and users can access it anytime, anywhere, without network restrictions." The work Honor has done on AI call translation is set to be recognized at the upcoming Interspeech conference on speech science and tech. But already, Honor is thinking about how this use of AI can be used to enable other new and exciting features for the people who buy its phones. 
"Beyond the essential user scenario of call translation, Honor's on-device large speech model will also be deployed in scenarios such as face-to-face translation [and] AI subtitles," said Fang. The process of developing the speech model has allowed Honor's AI team to gain extensive experience of model optimization, which it will use to develop other AI applications, she added. "Looking ahead, we will continue to expand capabilities in areas such as emotion recognition and health monitoring, further empowering voice interactions with your on-device AI assistant," she said.