
Top stocks to buy: Stock recommendations for the week starting June 30, 2025
Stock market recommendations:
According to Motilal Oswal Financial Services Ltd, the top stock picks for the week starting June 30, 2025 are Federal Bank and Prince Pipes. Let's take a look:
Name          CMP (Rs)   Target (Rs)   Upside
Federal Bank  209        250           20%
Prince Pipes  358        500           40%
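The upside column follows directly from the current market price (CMP) and target price: upside = target/CMP - 1. A minimal sketch verifying the table's figures (prices are from the table above; the brokerage has rounded to the nearest 10%, while the exact values are shown here):

```python
# Verify the upside implied by each CMP and target price.
picks = {
    "Federal Bank": (209, 250),   # (CMP, target) in rupees
    "Prince Pipes": (358, 500),
}

for name, (cmp_price, target) in picks.items():
    upside = target / cmp_price - 1          # fractional upside
    print(f"{name}: {upside:.1%}")           # ~19.6% and ~39.7%
```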
Federal Bank
Federal Bank has demonstrated strong business growth, rebalancing its portfolio toward medium- and high-yielding segments such as loans against property (LAP), used commercial vehicles, gold loans and credit cards to drive profitability.
We estimate loan growth to sustain at a 17% CAGR over FY25-28E, with asset quality remaining robust. Deposit growth is expected to accelerate to a 15% CAGR over FY25-28E, driven by a CA-led CASA push, a stronger NR franchise, and branch realignment.
CASA ratio is expected to rise to 34–35% by FY28E.
Asset quality remains strong, with GNPA/NNPA at 1.84%/0.44% and a provision coverage ratio above 75%. Under new CEO Mr. KVS Manian, FB is addressing its gaps and pivoting toward sustainable, return-driven growth across businesses and geographies.
We estimate RoA/RoE at 1.4%/15.6% by FY28E, driven by better margins, asset mix shift, and improved cost efficiency. FB is one of our preferred BUY-rated ideas among mid-size private banks.
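A CAGR estimate such as the 17% loan growth over FY25-28E compounds annually over the three-year span. A minimal sketch of the arithmetic (the FY25 base of 100 is an illustrative index, not a figure from the note):

```python
def project(base: float, cagr: float, years: int) -> float:
    """Compound `base` forward at `cagr` for `years` annual periods."""
    return base * (1 + cagr) ** years

# Illustrative: a loan book indexed to 100 in FY25, growing at a 17% CAGR to FY28
loans_fy28 = project(100, 0.17, 3)
print(round(loans_fy28, 1))  # roughly 160, i.e. ~60% cumulative growth
```

The same formula applies to the 15% deposit CAGR: compounding matters, so three years at 17% adds about 60%, not 51%.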
Prince Pipes
Prince Pipes is among India's top five plastic piping providers and operates seven plants (398K MTPA by FY25) with 7,200+ SKUs and a 1,500+ distributor network. With ~25% of revenue from CPVC and ~70% from real estate, PRINCPIP is set to benefit from India's growing real estate sector.
Strategic expansion in East India, premium product launches, and government infrastructure projects should further drive growth. We estimate revenue/EBITDA/adjusted PAT to grow at a robust 15%/38%/73% CAGR over FY25-28E, driven by a 12% volume CAGR off a low base.
