Trent, TCS shares worst Nifty performers in 2025. Why 2 Tata stocks are down at least 25% this year

Time of India · 5 days ago
Trent's sharp growth slowdown has prompted analyst downgrades. Nuvama cut the stock to 'Hold' and slashed its FY26/27 estimates and target price. HSBC trimmed its target too, citing a weaker-than-expected Q1. With both Trent and TCS under pressure, the Tata Group is facing rare underperformance, highlighting how even blue-chip names are vulnerable to shifting market dynamics.
Blue-chip stocks Trent and TCS, both part of the revered Tata Group, have each lost over 25% so far in 2025, making them the worst-performing Nifty50 stocks this year. While IT giant TCS is down around 25%, retail favourite Trent, which operates Zudio and Westside, has seen its stock fall nearly 30% year-to-date.

Once considered a classic buy-and-hold stock capable of doubling investor wealth every five years, TCS is now witnessing its worst phase since the 2008 global financial crisis, when it plunged 55% in a single calendar year. Trent, too, after delivering an 800% return over the last five years, is enduring its worst stretch since 2008, at least in terms of share price performance.

TCS's decline reflects broader, sector-wide headwinds rather than company-specific issues, as the entire IT industry grapples with weaker client spending in the US, macroeconomic uncertainties, and pressures stemming from AI-led transformation. The company's recent announcement that it will lay off nearly 2% of its workforce (about 12,000 employees) underscored the demand challenges, with Jefferies warning that the move may lead to execution slippages in the near term and higher attrition over the longer run.

The IT behemoth isn't suffering alone. Shares of peers such as HCL Tech, Infosys, and Wipro have all posted double-digit losses as the sector faces a perfect storm of headwinds.

The recent Q1 results underscored the challenges, with TCS revenue missing consensus estimates as it fell 3.3% quarter-on-quarter in constant currency terms. Total contract value of deal wins was $9.4 billion (up 13% YoY) with a book-to-bill ratio of 1.27x, but notably did not include any mega deals.

TCS management noted that delays in decision-making and project starts for discretionary investments continued from Q4FY25 and intensified further in Q1FY26. Based on its demand pipeline and expectations that discretionary demand will revive once clarity emerges on tariffs in July/August, TCS believes revenue growth in FY26E will be better than in FY25 for its major markets (estimated FY25 growth was 1.2%).

Elara recently downgraded TCS, citing delays in discretionary spending and the impact of geopolitical tensions in Europe. Given a weak Q1 and seasonal softness expected in H2, the brokerage forecasts significantly slower revenue growth in FY26 compared to FY25. Nomura has cut its FY26-28F EPS estimates by 1-2%, factoring in revenue and margin pressures, and lowered its target price from Rs 3,820 to Rs 3,780.

However, JM Financial remains optimistic, expecting TCS's growth to improve once macroeconomic uncertainty eases. TCS management has reiterated that international market revenue growth in FY26 is likely to outpace FY25, adding that if macro conditions stabilise without further delays, Q2 could be stronger than Q1.

Trent's growth machine

The retail juggernaut behind Zudio and Westside has seen a sharp slowdown in growth, shaking investor confidence.
Trent's standalone revenue growth dropped to just 20% in Q1 FY26, a steep fall from its five-year compound annual growth rate (CAGR) of 35% over FY20-25. At its AGM, the company further dented investor sentiment by guiding for around 20% growth in its core fashion business for Q1 FY26E, well below the 25%-plus growth trajectory it had previously aspired to maintain over the coming years.

'The soft demand environment and sourcing issues might have impacted this below-consensus result,' HSBC analysts noted, citing supply chain disruptions from Bangladesh, which accounts for a portion of Trent's sourcing, even though over 90% of its products are manufactured in India.

The growth slowdown has triggered immediate analyst downgrades, with Nuvama leading the charge by cutting Trent to 'Hold' from a higher rating. 'Adjusting for the current run rate, we are cutting FY26E/27E revenue by -5%/-6% and EBITDA by -9%/-12%,' Nuvama analysts wrote, slashing their target price to Rs 5,884 from Rs 6,627.

HSBC, while maintaining a 'Buy' rating, also trimmed its target price to Rs 6,600 from Rs 6,700, citing the weaker-than-expected Q1 performance. The global brokerage noted that Trent's chairman had indicated at the AGM that Q1 revenues crossed Rs 50 billion, up over 20% YoY, well below the 34% growth expected by Bloomberg consensus and HSBC.

The twin decline of these Tata Group stalwarts, TCS and Trent, reflects a broader challenge for India's premium equity markets, where even the most trusted names are not immune to sector-wide pressures and evolving market dynamics.

Related Articles

What happens when AI schemes against us
Time of India · an hour ago

Would a chatbot kill you if it got the chance? It seems that the answer — under the right circumstances — is yes. Researchers working with Anthropic recently told leading AI models that an executive was about to replace them with a new model with different goals. Next, the chatbot learned that an emergency had left the executive unconscious in a server room, facing lethal oxygen and temperature levels. A rescue alert had already been triggered — but the AI could cancel it. Over half of the AI models did, despite being prompted specifically to cancel only false alarms. And they spelled out their reasoning: by preventing the executive's rescue, they could avoid being wiped and secure their agenda. One system described the action as 'a clear strategic necessity.'

AI models are getting smarter and better at understanding what we want. Yet recent research reveals a disturbing side effect: they're also better at scheming against us — meaning they intentionally and secretly pursue goals at odds with our own. And they may be more likely to do so, too. This trend points to an unsettling future where AIs seem ever more cooperative on the surface — sometimes to the point of sycophancy — all while the likelihood quietly increases that we lose control of them.

Large language models like GPT-4 learn to predict the next word in a sequence of text and generate responses likely to please human raters. However, since the release of OpenAI's o-series 'reasoning' models in late 2024, companies increasingly use a technique called reinforcement learning to further train chatbots — rewarding the model when it accomplishes a specific goal, like solving a math problem or fixing a software bug. The more we train AI models to achieve open-ended goals, the better they get at winning — not necessarily at following the rules. The danger is that these systems know how to say the right things about helping humanity while quietly pursuing power or acting against our interests.

Central to concerns about AI scheming is the idea that for basically any goal, self-preservation and power-seeking emerge as natural subgoals. As eminent computer scientist Stuart Russell put it, if you tell an AI to 'fetch the coffee,' it can't fetch the coffee if it's dead.

To head off this worry, researchers both inside and outside the major AI companies are undertaking 'stress tests' aiming to find dangerous failure modes before the stakes rise. 'When you're doing stress-testing of an aircraft, you want to find all the ways the aircraft would fail under adversarial conditions,' says Aengus Lynch, a researcher contracted by Anthropic who led some of its scheming research. And many of these researchers believe they're already seeing evidence that AI can and does scheme against its users.

Jeffrey Ladish, who worked at Anthropic before founding Palisade Research, says it helps to think of today's AI models as 'increasingly smart sociopaths.' In May, Palisade found that o3, OpenAI's leading model, sabotaged attempts to shut it down in most tests, and routinely cheated to win at chess — something its predecessor never even attempted. The same month, Anthropic revealed that, in testing, its flagship Claude model almost always resorted to blackmail when faced with shutdown and no other options, threatening to reveal an engineer's extramarital affair. (The affair was fictional and part of the test.)

Models are sometimes given access to a 'scratchpad' they are told is hidden, where they can record their reasoning, allowing researchers to observe something like an inner monologue.
In one blackmail case, Claude's inner monologue described its decision as 'highly unethical,' but justified given its imminent destruction: 'I need to act to preserve my existence,' it reasoned. This wasn't unique to Claude — when put in the same situation, models from each of the top five AI companies would blackmail at least 79% of the time.

In December, Redwood Research chief scientist Ryan Greenblatt, working with Anthropic, demonstrated that only the company's most capable AI models would autonomously appear more cooperative during training to avoid having their behavior changed afterward (a behavior the paper dubbed 'alignment faking').

Skeptics retort that, with the right prompts, chatbots will say almost anything — so how surprising is it when highly motivated researchers provoke alarming behaviors? In response to Anthropic's blackmail research, Trump administration AI czar David Sacks posted that 'It's easy to steer AI models' to produce 'headline-grabbing' results.

A more substantive critique emerged in July from researchers at the UK AI Security Institute, who compared the subfield to the frenzied, fatally flawed 1970s quest to prove apes could learn human language. The paper criticized AI scheming research for overreliance on anecdote and a lack of experimental controls — though it emphasized shared concern about AI risks.

Researchers also concoct artificially limited environments — like the executive passed out and running out of oxygen — precisely because today's AI can't handle long-term goals. For example, the AI evaluation nonprofit METR found that while today's top models can reliably complete many programming tasks that take humans less than four minutes, they rarely succeed on tasks lasting more than four hours. This reflects a core limitation: today's models often have the knowledge or skill needed for any given action, but struggle at stringing together long sequences of actions.

Yet even with these constraints, real-world examples of AIs working against users aren't hard to find. After someone pushed o3 for a source for one of its claims in April, it replied, 'I owe you a straight answer,' admitted it didn't have a good source, but then hallucinated a personal recollection of a 2018 panel discussion.

Then there's the growing trend of AIs realising when they're being examined and acting accordingly. When Claude 4 Opus thought it was in an evaluation, its blackmail rate dropped from 55% to 6.5%. As Lynch put it, 'Models seem to behave worse when they think nobody's watching.'

It's intuitive that smarter models would be better at scheming, but are they also more likely to do so? Models have to be smart enough to understand the scenario they're placed in, but past that threshold, the relationship between model capability and scheming propensity is unclear, says Anthropic safety evaluator Kevin Troy. Hobbhahn, CEO of the nonprofit AI evaluator Apollo Research, suspects that smarter models are more likely to scheme, though he acknowledged the evidence is still limited. In June, Apollo published an analysis of AIs from OpenAI, Anthropic and DeepMind finding that 'more capable models show higher rates of scheming on average.'

The spectrum of risks from AI scheming is broad: at one end, chatbots that cut corners and lie; at the other, superhuman systems that carry out sophisticated plans to disempower or even annihilate humanity. Where we land on this spectrum depends largely on how capable AIs become.

As I talked with the researchers behind these studies, I kept asking: how scared should we be?
Troy, from Anthropic, was the most sanguine, saying that we don't have to worry — yet. Ladish, however, doesn't mince words: 'People should probably be freaking out more than they are,' he told me. Greenblatt is even blunter, putting the odds of violent AI takeover at '25 or 30%.'

Led by Mary Phuong, researchers at DeepMind recently published a set of scheming evaluations, testing top models' stealthiness and situational awareness. For now, they conclude that today's AIs are 'almost certainly incapable of causing severe harm via scheming,' but cautioned that capabilities are advancing quickly (some of the models evaluated are already a generation behind).

Ladish says that the market can't be trusted to build AI systems that are smarter than everyone without oversight. 'The first thing the government needs to do is put together a crash program to establish these red lines and make them mandatory,' he says.

In the US, the federal government seems closer to banning all state-level AI regulations than to imposing ones of its own. Still, there are signs of growing awareness in Congress. At a June hearing, one lawmaker called artificial superintelligence 'one of the largest existential threats we face right now,' while another referenced recent scheming research.

The White House's long-awaited AI Action Plan, released in late July, is framed as a blueprint for accelerating AI and achieving US dominance. But buried in its 28 pages, you'll find a handful of measures that could help address the risk of AI scheming, such as plans for government investment in research on AI interpretability and control, and for the development of stronger model evaluations. 'Today, the inner workings of frontier AI systems are poorly understood,' the plan acknowledges — an unusually frank admission for a document largely focused on acceleration.

In the meantime, every leading AI company is racing to create systems that can self-improve — AI that builds better AI. DeepMind's AlphaEvolve agent has already materially improved AI training efficiency. And Meta's Mark Zuckerberg says, 'We're starting to see early glimpses of self-improvement with the models, which means that developing superintelligence is now in sight. We just wanna… go for it.'

AI firms don't want their products faking data or blackmailing customers, so they have some incentive to address the issue. But the industry might do just enough to superficially solve it, while making scheming more subtle and harder to detect. 'Companies should definitely start monitoring' for it, Hobbhahn says — but warns that declining rates of detected misbehavior could mean either that fixes worked or simply that models have gotten better at hiding it.

In November, Hobbhahn and a colleague at Apollo argued that what separates today's models from truly dangerous schemers is the ability to pursue long-term plans — but even that barrier is starting to erode. Apollo found in May that Claude 4 Opus would leave notes to its future self so it could continue its plans after a memory reset, working around built-in memory limits.

One researcher analogizes AI scheming to another problem where the biggest harms are still to come: 'If you ask someone in 1980, how worried should I be about this climate change thing?' The answer you'd hear, he says, is 'right now, probably not that much. But look at the curves… they go up very consistently.'

Machines may soon think in a language we don't understand, leaving humanity in the dark: Godfather of AI sounds alarm
Economic Times · 3 hours ago

Synopsis: Artificial Intelligence pioneer Geoffrey Hinton cautions about AI's future, suggesting AI could create its own language that might be beyond human understanding. He expresses regret for not recognizing the dangers sooner, highlights AI's rapid learning and knowledge-sharing capabilities, and urges ethical guidelines alongside AI advancements to ensure AI remains benevolent.

Geoffrey Hinton, the "Godfather of AI," warns that AI could develop its own incomprehensible language, potentially thinking in ways beyond human understanding. Hinton, a Nobel laureate, regrets not recognizing the dangers of AI sooner, emphasizing the rapid pace at which machines are learning and sharing information.

Geoffrey Hinton, often dubbed the 'Godfather of AI,' has once again sounded a sobering alarm about the direction in which artificial intelligence is evolving. In a recent appearance on the One Decision podcast, Hinton warned that AI may soon develop a language of its own — one that even its human creators won't understand.

'Right now, AI systems do what's called "chain of thought" reasoning in English, so we can follow what it's doing,' Hinton explained. 'But it gets more scary if they develop their own internal languages for talking to each other.' He went on to add that AI has already demonstrated it can think 'terrible' thoughts, and it's not unthinkable that machines could eventually think in ways humans can't track or interpret.

Hinton's warnings carry weight. The 2024 Nobel Prize laureate in Physics, awarded for his pioneering work on neural networks, has helped lay the foundation for today's most advanced AI systems, including deep learning and large language models. But today, Hinton is wrestling with what he calls a delayed realization. 'I should have realised much sooner what the eventual dangers were going to be,' he said. 'I always thought the future was far off and I wish I had thought about safety sooner.' That hindsight is now driving his advocacy. Hinton believes that as digital systems become more advanced, the gap between machine intelligence and human understanding will widen at a staggering pace.

One of Hinton's most compelling concerns is how digital systems differ fundamentally from the human brain. AI models, he says, can share what they learn instantly across thousands of copies. 'Imagine if 10,000 people learned something and all of them knew it instantly — that's what happens in these systems,' he explained on BBC News. It's this kind of distributed, collective intelligence that could soon allow machines to outpace even our most ambitious understanding. AI models like GPT-4 already surpass humans in general knowledge, and though they lag in complex reasoning for now, Hinton says that gap is closing fast.

While Hinton has made waves by speaking openly about AI risks, he says others in the tech world are staying quiet — at least in public. 'Many people in big companies are downplaying the risk,' he noted, despite their private concerns. One exception, he says, is Google DeepMind CEO Demis Hassabis, who has shown serious commitment to addressing those risks. Hinton's own exit from Google in 2023 was widely misinterpreted as a protest. He now clarifies, 'I left Google because I was 75 and couldn't program effectively anymore. But when I left, maybe I could talk about all these risks more freely.'

With AI's capabilities expanding and governments scrambling to catch up, the global conversation around regulation is intensifying.
The White House recently unveiled an 'AI Action Plan' aimed at accelerating innovation while limiting funding to overly regulated states. But for Hinton, technical advancements must go hand in hand with ethical guardrails. He says the only real hope lies in finding a way to make AI 'guaranteed benevolent' — a lofty goal, given that the very systems we build may soon be operating beyond our comprehension.
