
Latest news with #BalaramanRavindran

India needs AI. But it also needs jobs. A balance must be found

Indian Express

19-07-2025



Written by Balaraman Ravindran, Omir Kumar, and Krishnan Narayanan

A few weeks back, two influential voices in India's economic landscape, N Chandrasekaran, Chairman of Tata Sons, and Chief Economic Advisor (CEA) V Anantha Nageswaran, offered important and complementary perspectives on artificial intelligence (AI). Chandrasekaran, in TCS's FY2025 annual report, described Generative AI (GenAI) as a 'civilisational shift,' with the rise of AI agents and autonomous robots ushering in a future of 'dark factories' and AI-assisted enterprise functions. He also highlighted the 'human+AI model' of delivering solutions. Meanwhile, CEA Nageswaran, speaking at the CII Annual Business Summit, issued a note of caution: AI deployment is not inevitable. He reminded the private sector that India needs to create at least 8 million jobs a year; hence, businesses must consider where to stop automating and instead choose labour.

This tension, between AI's promise of productivity and its peril for employment, is now central to India's growth trajectory. The Indian Economic Survey 2024–25 had flagged the impact of AI on labour markets as a policy imperative. As AI becomes more capable and less costly, low-value service jobs, especially in India's labour-surplus economy, become increasingly vulnerable. A 2024 IIM Ahmedabad survey found that 68 per cent of Indian white-collar workers expect their roles to be partially or fully automated within five years; 40 per cent believe AI will make their skills redundant.

Clearly, India must act. But rather than resisting the AI wave, the real challenge lies in shaping it so that technology augments, rather than replaces, human potential. India needs a three-pronged institutional architecture: one that enables workers through education and skilling, insures against displacement, and stewards the broader social and economic transition. The enabling agenda is critical.
In Learning by Doing, the economist James Bessen shows that technologies like the spinning mule took several decades to be fully adopted during the Industrial Revolution, not due to access issues but because it took time for workers and firms to learn how to use them effectively. What about AI? Unlike the spinning mule, AI is not a single tool. It is a fast-growing mix of powerful and diverse technologies. This increases both the challenge and the opportunity. India's skilling efforts must be agile and keep pace with this growth in AI. India should ensure that the benefits from AI do not accrue just to a narrow band of skilled workers. Instead, vocational training, on-the-job learning, and open knowledge-sharing on leveraging AI on the factory floor and across all services must be embedded into national skilling programmes.

At the same time, the government must insure against job losses and dislocation. Workers affected by automation need social protection, access to reskilling pathways, and incentives to transition to adjacent roles. The skilling effort cannot be limited to college education; it must also accommodate informal workers and mid-career transitions.

India also needs stewarding institutions that ensure AI is deployed responsibly, transparently, and inclusively. These institutions would be tasked with identifying emerging risks, conducting foundational safety research, and setting standards. They should also research how to build effective human-AI teams, based on a deep understanding of socio-economic value, the availability of human skills, and the evolving capabilities of AI. This will be essential for designing job roles and workplace structures that make the most of co-intelligence. Augmentation means AI systems assist humans, enhancing their judgement, creativity, and productivity, rather than replacing them outright. Take agriculture.
Instead of replacing existing workers, AI-based agri-chatbots can empower farmers with timely advice on weather, pests, and crop management. In education, AI can help teachers identify student needs and personalise lesson plans. A study by Anthropic found that 57 per cent of tasks involved human-AI collaboration, not substitution.

What can enterprises do to achieve this human amplification with AI? In The Co-Intelligence Revolution, Venkat Ramaswamy and Krishnan Narayanan suggest that every organisation must: become a co-intelligent enterprise, where value is co-created between humans and AI; reimagine its workers not as passive operators of systems but as creative experiencers (individuals who actively shape and are shaped by their interactions with intelligent technologies); and create a Co-Intelligence Knowledge Environment, where human insights, experiential feedback, and AI-driven suggestions flow dynamically to inform decisions and design.

Siemens exemplifies this shift through its industrial metaverse, where engineers, designers, and shopfloor workers collaboratively engage with digital twins and AI co-pilots in a virtual simulation environment. In one compelling instance, a new factory was built entirely in the metaverse before physical construction began. Workers explored the virtual factory, offered feedback on ergonomics, workflows, and safety, and their suggestions were integrated into the final design. The actual factory space thus reflected their lived experiences and needs. This approach not only optimised operations but also fostered a sense of ownership, dignity, and well-being among the workers, hallmarks of a truly co-intelligent enterprise. Indian businesses should thoughtfully design co-intelligence into their environments. But markets don't always favour augmentation.
Economists Daron Acemoglu and Pascual Restrepo argue that when automation becomes the dominant paradigm, innovation and investment naturally follow it, even when augmentation via co-intelligence may yield higher social benefits. One of the most important policy nudges for the Indian government would be to steer AI solutions towards augmentation, through public-private partnerships, incentives for augmentation-based innovation, and 'human-in-the-loop' design mandates for AI systems. This holds especially true for contexts where automation may have a high social impact. India can and must shape the trajectory of this emerging general-purpose technology and push the AI ecosystem in a more inclusive direction.

Balaraman Ravindran is Head, Wadhwani School of Data Science and AI & Centre for Responsible AI, IIT Madras. Omir Kumar is Policy Analyst, Centre for Responsible AI, IIT Madras. Krishnan Narayanan is Co-founder and President of itihaasa Research and Digital.

IIT Madras' Wadhwani School of Data Science, AI partners with Lloyds Technology Centre for AI, ML research

Indian Express

09-07-2025



The Wadhwani School of Data Science and AI (WSAI) at the Indian Institute of Technology Madras (IIT Madras) on Tuesday entered into a strategic partnership with Lloyds Technology Centre to advance deep research and industrial innovation in Artificial Intelligence (AI) and Machine Learning (ML). The agreement aims to strengthen the industry-academia interface by promoting joint research projects and advanced training programmes. It was signed in the presence of Prof Balaraman Ravindran, Head of WSAI, and Sirisha Voruganti, CEO and Managing Director of Lloyds Technology Centre, at the IIT Madras campus.

Under this initiative, engineers from Lloyds Technology Centre will undergo specialised training in fields including Advanced Data Engineering and AI/ML through certified programmes offered by WSAI. The partnership also includes collaborative research in domains like banking, finance, pensions, insurance, and investments, areas where AI-driven innovation is expected to have a transformative impact.

'This collaboration goes beyond a single project,' said Prof Ravindran. 'It is part of our broader vision to build enduring partnerships with industry leaders. With a cohort of Lloyds' scientists and engineers already participating in training programmes, we have also begun exploring complex AI challenges together.'

Echoing this sentiment, Sirisha Voruganti emphasised the value of the collaboration for both learning and innovation. 'As a technology and data company, our focus is on delivering better and faster solutions to our customers. This partnership with IIT Madras allows us to work closely with world-class researchers and build smarter predictive models.'

'With a broad range of interests spanning pension, insurance, investments and more, we are very excited to delve down and collaborate with the researchers at the institute for building smarter predictive models. We are also ensuring the learning journey is continuous and sponsoring our employees to come to the Wadhwani School of Data Science and AI and get certified in advanced AI courses,' Voruganti added.

AI: Where demand for courses is high, but supply of teachers, poor

The Hindu

29-06-2025



In the past three years, the intake in Artificial Intelligence (AI) courses has almost doubled in Tamil Nadu, and it is expected to be at its highest this year. For the AI and Data Science course at the undergraduate level alone, the intake has gone up from 7,049 in 2022-23 to 15,702 in 2024-25. The number of courses on AI has grown proportionately. According to estimates, there are 800 courses on AI offered by institutions across India, said Balaraman Ravindran, head, Department of Data Science and Artificial Intelligence, Indian Institute of Technology-Madras.

However, there aren't enough faculty members with a qualified AI background to teach these courses. Besides, Computer Science, which provides the foundational tools for AI, does not form the core component of the emerging field, since AI relies heavily on mathematical concepts and principles to develop algorithms, he said. It is therefore pertinent that faculty members teaching AI be from an AI background or equip themselves through Faculty Development Programmes (FDPs).

Subalalitha C.N., professor, Department of Computing Technologies, SRM Institute of Science and Technology (SRMIST), concurred with this view. SRMIST offers 10 different AI courses, and finding faculty members has been a challenge. 'Certifications and FDPs are the ways through which the faculty members are equipping themselves, especially since more students are opting for AI courses,' she said.

There is also a move towards integrating AI with other core engineering subjects. Such a multidisciplinary approach, she said, has also attracted funding from institutions. 'AI is full of maths. All algorithms have core mathematics in their background. Hence, the integration is easy,' she said. Also, the requirements of different professionals working on AI, whether researchers, faculty members, or developers, are different.
Those developing applications may not require heavy reliance on mathematics, but those developing language models would. 'They would align according to their priority or preference,' she added.

The dearth of AI teachers is a matter of concern. Experts say collaboration with industry could bridge the gap to some extent, but there should be a road map for faculty development. Besides training programmes for faculty members, efforts are needed to attract talented youngsters proficient in AI to academics.

Understanding shift from AI Safety to Security, and India's opportunities

Indian Express

08-05-2025



Written by Balaraman Ravindran, Vibhav Mithal and Omir Kumar

In February 2025, the UK announced that its AI Safety Institute would become the AI Security Institute. This triggered several debates about what the change means for AI safety. As India prepares to host the AI Summit, a key question will be how to approach AI safety.

The What and How of AI Safety

In November 2023, more than 20 countries, including the US, UK, India, China, and Japan, attended the inaugural AI Safety Summit at Bletchley Park in the UK. The Summit took place against the backdrop of the increasing capabilities of AI systems and their integration into multiple domains of life, including employment, healthcare, education, and transportation. Countries acknowledged that while AI is a transformative technology with potential for socio-economic benefit, it also poses significant risks through both deliberate and unintentional misuse. A consensus emerged among the participating countries on the importance of ensuring that AI systems are safe and that their design, development, deployment, or use does not harm society, leading to the Bletchley Declaration.

The Declaration further advocated for developing risk-based policies across nations, taking into account national contexts and legal frameworks, while promoting collaboration, transparency from private actors, robust safety evaluation metrics, and enhanced public sector capability and scientific research. It was instrumental in bringing AI safety to the forefront and laid the foundation for global cooperation. Following the Summit, the UK established the AI Safety Institute (AISI), with similar institutes set up in the US, Japan, Singapore, Canada, and the EU. Key functions of AISIs include advancing AI safety research, setting standards, and fostering international cooperation.
India has also announced the establishment of its AISI, which will operate on a hub-and-spoke model involving research institutions, academic partners, and private sector entities under the Safe and Trusted pillar of the IndiaAI Mission.

UK's Shift from Safety to Security

The establishment of AISIs in various countries reflected a global consensus on AI safety. However, the discourse took a turn in February 2025, when the UK rebranded its Safety Institute as the Security Institute. The press release noted that the new name reflects a focus on risks with security implications, such as the use of AI in developing chemical and biological weapons, cybercrimes, and child sexual abuse. It clarified that the Institute would not prioritise issues like bias or free speech but would focus on the most serious risks, helping policymakers ensure national safety. The UK government also announced a partnership with Anthropic to deploy AI systems for public services, assess AI security risks, and drive economic growth.

India's Understanding of Safety

Given the UK's recent developments, it is important to explore what AI safety means for India. Firstly, when we refer to AI safety, that is, making AI systems safe, we usually talk about mitigating harms such as bias, inaccuracy, and misinformation. While these are pressing concerns, AI safety should also encompass broader societal impacts, such as effects on labour markets, cultural norms, and knowledge systems. One of the Responsible AI (RAI) principles laid down by NITI Aayog in 2021 hinted at this broader view: 'AI should promote positive human values and not disturb in any way social harmony in community relationships.' The RAI principles also address equality, reliability, non-discrimination, privacy protection, and security, all of which are relevant to AI safety. Thus, adherence to RAI principles could be one way of operationalising AI safety. Secondly, safety and security should not be seen as mutually exclusive.
We cannot focus on security without first ensuring safety. For example, in a country like India, bias in AI systems could pose national security risks by inciting unrest. As we aim to deploy 'AI for All' in sectors such as healthcare and education, it is essential that these systems are not only secure but also safe and responsible. A narrow focus on security alone is insufficient.

Lastly, AI safety must align with AI governance and be viewed through a risk mitigation lens, addressing risks throughout the AI system lifecycle. This includes safety considerations from the conception of the AI model or system, through data collection, processing, and use, to design, development, testing, deployment, and post-deployment monitoring and maintenance. India is already taking steps in this direction. The Draft Report on AI Governance by IndiaAI emphasises the need to apply existing laws to AI-related challenges while also considering new laws to address legal gaps. In parallel, other regulatory approaches, such as self-regulation, are also being explored.

Given the global shift from safety to security, the upcoming AI Summit presents India with an important opportunity to articulate its unique perspective on AI safety, both in the national context and as part of a broader global dialogue.

Ravindran is Head, Wadhwani School of Data Science and AI & Centre for Responsible AI (CeRAI), IIT Madras. Mithal is Associate Research Fellow, CeRAI (and Associate Partner, Anand and Anand). Kumar is Policy Analyst, CeRAI.
