
AI ethics & employability
For business schools, this isn't just a curriculum challenge—it's a moral and strategic imperative.
Machines now write emails, analyse markets, and even diagnose diseases. But while we celebrate these efficiencies, we must also confront a more sobering reality—our reliance on AI is growing faster than our understanding of its ethical boundaries. Many students entering the workforce today can prompt ChatGPT with ease, yet struggle to question the fairness of an algorithm or recognise when automation replaces empathy.
This is where business education must evolve—not just to teach how AI works, but to ask why it should be used, who it serves, and what it might displace.
At institutions like the University of Massachusetts Amherst and the University of Colorado Boulder, business schools are already taking bold steps—launching dedicated courses on AI ethics, building multi-stakeholder committees, and embedding GenAI tools into foundational coursework. At FIIB, we too are reflecting deeply on how AI integration must be as much about critical thinking and conscience as it is about technical proficiency.
Because at the heart of this revolution lies a powerful truth: AI is created by humans—and it inherits our flaws.
From insurance companies using opaque algorithms to deny claims, to marketing departments unknowingly lifting copyrighted content, the ethical dilemmas are real and rising. Bias in datasets, lack of transparency, and the black-box nature of AI decision-making demand that we teach students not just to use these tools—but to challenge them, audit them, and lead ethically through them.
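What does it mean, concretely, to audit an algorithm? One simple starting point taught in fairness courses is to compare a model's decision rates across demographic groups. The sketch below uses hypothetical loan-approval data (invented for illustration, not drawn from any real system) and applies the "four-fifths rule" heuristic from US employment-selection guidelines as a rough screen for disparate impact:

```python
# Illustrative bias audit: compare approval rates across two groups
# using hypothetical model decisions (1 = approved, 0 = denied).

def approval_rate(decisions):
    """Fraction of positive (approved) decisions."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs for loan applicants in two groups:
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6 of 8 approved
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 3 of 8 approved

rate_a = approval_rate(group_a)
rate_b = approval_rate(group_b)

# The "four-fifths rule" flags a selection rate below 80% of the
# most-favoured group's rate as potential adverse impact.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Group A: {rate_a:.0%}, Group B: {rate_b:.0%}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: audit the model and its training data.")
```

A check this crude proves nothing on its own—real audits examine training data, proxy variables, and error rates too—but it shows students that "challenging the algorithm" can begin with a few lines of arithmetic rather than a black box taken on faith.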
Historically, universities have been centres of knowledge transmission. But in the AI age, they must also become centres of knowledge navigation—places where students learn how to live and lead in a world where human and machine intelligence coexist, often contentiously.
We need to expand the traditional triad of teaching, research, and service into a more dynamic ecosystem—one that fuses academic rigour with industry relevance and social responsibility. AI shouldn't just be a vertical within IT electives; it should become a horizontal theme cutting across marketing, operations, finance, and strategy.
The National Education Policy (NEP) 2020 in India provides a timely foundation for this shift. By encouraging multidisciplinary thinking, innovation labs, and industry-academia collaboration, the NEP invites institutions to evolve beyond silos and reimagine themselves as hubs of real-world problem-solving.
India is at a critical inflection point. With declining university-age demographics, rapid industrial shifts, and growing global competition, universities cannot afford to stand still. They must embed AI thinking across disciplines, establish centres of excellence, collaborate with industry to shape demand-driven curricula, and foster faculty development programmes that ensure educators are as AI-aware as their students.
This transformation need not be expensive—it must be intentional.
We've already seen industry giants like Microsoft and Google partner with Indian institutions to create AI upskilling initiatives. But what we need next is a coordinated national strategy—one that recognises business schools not just as talent factories, but as ethics incubators and policy influencers.
Amid all this change, one thing must remain clear: AI should not be used to replace human intelligence—it should enhance it. And that enhancement must include empathy, diversity, fairness, and inclusion.
We must teach our students that while AI may accelerate analysis, it cannot replace curiosity. It may automate tasks, but not trust. And it may generate content, but not character.
Let us not create a generation of professionals who can code without conscience or automate without accountability. Let us instead nurture responsible leaders who understand that the future of business is not just digital—it's deeply human.
In the race to keep up with AI, business schools must not just adapt—they must lead. And they must do so with boldness, foresight, and a renewed commitment to building a world where technology serves humanity—not the other way around.
This article is authored by Radhika Shrivastava, CEO and president, FIIB.