SPJIMR holds Annual Convocation 2025 with HUL's Nitin Paranjpe as Chief Guest

Business Upturn | 22 May 2025

802 graduates embark on a journey to lead with integrity, passion, and a growth mindset
MUMBAI, India, May 22, 2025 /PRNewswire/ — Bharatiya Vidya Bhavan's S.P. Jain Institute of Management and Research (SPJIMR), one of India's leading business schools, hosted its Annual Convocation 2025 at its Mumbai campus on Saturday, 17 May. The event celebrated the achievements of over 800 graduating participants from across the Institute's full-time and modular postgraduate programmes, including two Fellow Programme in Management (FPM) scholars.
The Convocation was graced by Mr. Nitin Paranjpe, Non-Executive Chairman of Hindustan Unilever, as Chief Guest. Also present were Mr. Deepak Parekh, Chair of SPJIMR's Governing Council; Dr. Varun Nagaraj, Dean, SPJIMR; faculty and staff; and family members of the graduating participants.
Delivering the Convocation Address, Mr. Nitin Paranjpe drew upon his decades of global business experience to urge the graduating cohort to lead with purpose, resilience, and a commitment to continuous growth. Speaking of India's bold aspiration to become a Viksit Bharat by 2047, he highlighted the need for a new kind of leadership—one that is ethical, compassionate, and driven by the common good.
Mr. Paranjpe shared three powerful life lessons drawn from his personal and professional journey: 'There's no such thing as a good job or a bad one — it's the mindset you bring that shapes your experience. The real joy, and the foundation for excellence, lies in loving what you do. Don't chase being the best in the world — strive to be the best version of yourself. Continuous self-improvement is your right, your responsibility, and your greatest opportunity. Purpose is that inner force that fuels your energy and passion. When you find what truly drives you, hold on to it — it's the most precious gift you'll ever have.'
In his address, Dean Dr. Varun Nagaraj shared a candid reflection on the challenges graduates face in an uncertain world—geopolitical tensions and the dual promise and peril of AI. He urged them to enter this complex landscape with confidence, guided by SPJIMR's mission of value-based growth and wise innovation.
He reaffirmed SPJIMR's commitment to nurturing leaders who combine knowledge with empathy and impact with purpose: 'Tomorrow you will be a different and better person than today. Embrace a growth mindset—remember, you are a work in progress, not a finished product. The world offers no ready-made answers; solutions come only through hard work and wise innovation.'
Sixteen top achievers across programmes were presented with Scholastic Medals by Chief Guest Mr. Nitin Paranjpe, and 50 recipients from the Dean's Honour List received their commendation certificates from the Dean.
Diplomas were conferred across the Institute's full-time and modular programmes, including the Post Graduate Diploma in Management (PGDM), Post Graduate Programme in Management (PGPM), PGDM (Business Management), Post Graduate Executive Management Programme (PGEMP), Post Graduate Programme in Development Management (PGPDM), Post Graduate Programme in General Management (PGPGM), and Post Graduate Programme in Marketing & Business Management (PGPMBM).
The Convocation ceremony concluded with the traditional hat throw, celebrating the graduates' journey and the beginning of their next chapter as responsible, value-based leaders.
About SPJIMR
Bharatiya Vidya Bhavan's S.P. Jain Institute of Management and Research (SPJIMR) is a leading postgraduate management institute, recognised by the Financial Times MiM Global Rankings as India's #1 business school, by Business Today as one of the country's top five business schools, and by the Positive Impact Rating as one of the top five business schools worldwide for societal impact. Known for its innovative and socially conscious approach to management education, research, and community engagement, SPJIMR aims to influence managerial practice and promote the value-based growth of its students, alumni, organisations and their leaders, and society. SPJIMR holds the international 'triple crown' of accreditations from EQUIS, AACSB, and AMBA.
Visit SPJIMR.org for more information.
Photo: https://mma.prnewswire.com/media/2694116/SPJIMR_Guest_Nitin_Paranjape.jpg
View original content to download multimedia: https://www.prnewswire.com/in/news-releases/spjimr-holds-annual-convocation-2025-with-huls-nitin-paranjpe-as-chief-guest-302463079.html
Disclaimer: The above press release comes to you under an arrangement with PR Newswire. Business Upturn takes no editorial responsibility for the same.

