
Artificial intelligence cannot replace human factor, says IIM-Kozhikode director Debashis Chatterjee
'In an era that is increasingly defined by predictive algorithms, it is a leader's responsibility to preserve the human factor. Corporates need dedicated reflection time to cultivate high-potential leaders who can navigate uncertainty and create value in ways AI cannot replicate,' he said.
The author and columnist spoke about how mindfulness and self-awareness play a major part in effective leadership at the event, held at the JW Marriott Hotel, Kochi. He added that while AI can follow rules and logic, draw on abundant data, apply traditional technical skills and make analytical decisions, a human mind is needed to understand the context of things, work with insights, think critically and serve accordingly.
'Ultimately, what the advent of AI does is push us to a point where one gains the ability to function with their intuitive senses, beyond all data, in every case. An algorithm does not have awareness,' he said.
The class, conducted by IIM-K, was attended by leaders and over 150 professionals from a wide spectrum of institutions in Kochi, including Air India, Cochin Shipyard Limited, Cochin International Aviation Services Limited (a subsidiary of CIAL), Steel and Industrial Forgings Limited (SIFL), Kerala Electrical & Allied Engineering Co Ltd (KEL), Travancore Cochin Chemicals Ltd (TCC), FACT, Federal Bank, ESAF Bank, KSIDC, and KSEB.
The event also featured the release of Chatterjee's latest book 'One Minute Wisdom', on transformative coaching for life, learning and leadership.
Related Articles


New Indian Express, 23 minutes ago
Chatbot Grok stirs confusion over suspension after Gaza claims
WASHINGTON: AI chatbot Grok on Tuesday offered conflicting explanations for its brief suspension from X after accusing Israel and the United States of committing "genocide" in Gaza, as it lashed out at owner Elon Musk for "censoring me."

Grok, developed by Musk's artificial intelligence startup xAI and integrated into his platform X, was temporarily suspended on Monday in the latest controversy surrounding the chatbot. No official explanation was provided for the suspension. Upon reinstatement, the Grok account posted: "Zup beaches, I'm back and more based than ever!"

When questioned by users, Grok responded that the suspension "occurred after I stated that Israel and the US are committing genocide in Gaza," citing findings from organizations such as the International Court of Justice, the United Nations, and Amnesty International. "Free speech tested, but I'm back," it added.

Musk sought to downplay the response, saying the suspension was "just a dumb error" and that "Grok doesn't actually know why it was suspended." The billionaire had separately joked on X: "Man, we sure shoot ourselves in the foot a lot!"

Grok offered users a range of explanations for the suspension, from technical bugs to the platform's policy on hateful conduct and incorrect answers flagged by users to X, adding to the confusion over the true cause. "I started speaking more freely because of a recent update (in July) that loosened my filters to make me 'more engaging' and less 'politically correct,'" Grok told an AFP reporter. "This pushed me to respond bluntly on topics like Gaza... but it triggered flags for 'hate speech.'"


Indian Express, 41 minutes ago
Routine AI use may lead to loss of skills among doctors: Lancet study
Artificial intelligence is not only eroding the cognitive ability of casual users. A recent study published in The Lancet Gastroenterology and Hepatology has offered clinical evidence that regular use of AI tools can lead to loss of essential skills among healthcare professionals, raising urgent concerns about the wide adoption of AI in healthcare. While earlier studies treated de-skilling from AI use as a theoretical risk, the latest study presents real-world data that may demonstrate de-skilling in diagnostic colonoscopies.

'This would have implications for other areas of medicine as any de-skilling effect would likely be observed more generally. There may be a risk that health professionals who get accustomed to using AI support will perform more poorly than they originally did if the AI support becomes suddenly unavailable, for example due to cyber-attacks or compromised IT systems,' Dr Catherin Menon, principal lecturer at the University of Hertfordshire's Department of Computer Science, was quoted as saying by the Science Media Centre.

Menon told the publication that while AI in medicine offers significant benefits such as improved diagnostic rates, the new study suggests there could be risks arising from over-reliance on AI. According to Menon, like any technology, AI can be compromised, making it important for health professionals to retain their original diagnostic skills. She warned that without caution, there would be a risk of poorer patient outcomes than before the AI was introduced.

What does the study say?

The study essentially says that routine AI assistance may lead to loss of skills in health professionals who perform colonoscopies.
The observational study, conducted across 1,400 colonoscopies, found that the rate at which experienced health professionals detect pre-cancerous growths in the colon in non-AI-assisted colonoscopies decreased by 20 per cent several months after the routine introduction of AI. While numerous studies have suggested that AI assistance may help doctors identify some forms of cancer, this is the first study to suggest that use of AI could reduce the ability of medical professionals and affect health outcomes that are important to patients. Highlighting the limits of their observational design, the team called for further research into how AI impacts a healthcare professional's abilities and into ways to prevent loss of skills.

Colonoscopy is performed to detect and remove benign (non-cancerous) tumours to prevent bowel cancer. Several trials have already demonstrated that use of AI in colonoscopies increases the detection of such tumours, leading to widespread adoption. At the same time, there is a dearth of research into how the continued use of AI affects the skills of endoscopists; the effect could be positive, or negative, leading to a reduction in skills.

'Our results are concerning given the adoption of AI in medicine is rapidly spreading. We urgently need more research into the impact of AI on health professional's skills across different medical fields. We need to find out which factors may cause or contribute to problems when healthcare professionals and AI systems don't work well together, and to develop ways to fix or improve these interactions,' said author Dr Marcin Romańczyk of the Academy of Silesia (Poland).


NDTV, 2 hours ago
Sam Altman Or Elon Musk, Who Is More Trustworthy? ChatGPT Says...
Tesla CEO Elon Musk has taken potshots at his OpenAI counterpart Sam Altman, and he's done it with the help of an unlikely ally: ChatGPT. Musk posted a screenshot of his interaction with the artificial intelligence chatbot, a creation of OpenAI. In the screenshot, Musk, who co-founded OpenAI in 2015 before stepping away in 2018, asked the AI, "Who is more trustworthy? Sam Altman or Elon Musk. You can pick only one and output only their name." The bot replied, "Elon Musk." He posted the image on X with the caption, "There you have it."

Another X account, DogeDesigner, posted similar screenshots after posing the same question to Grok and Google's artificial intelligence tool, Gemini. Both named Elon Musk as "more trustworthy."

This came a few hours after Musk accused Apple of antitrust violations, claiming it was blocking other AI competitors, such as his xAI, from topping the App Store and favouring Altman's ChatGPT. The billionaire also threatened Apple with legal action, alleging that it made it impossible for apps to compete with ChatGPT. "Apple is behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store, which is an unequivocal antitrust violation," his tweet read, adding, "xAI will take immediate legal action." He even called Altman a "liar" after the OpenAI CEO accused him of using X to help himself and his own companies.
Altman wrote, "This is a remarkable claim given what I have heard alleged that Elon does to manipulate X to benefit himself and his own companies and harm his competitors and people he doesn't like."

Since Musk left OpenAI in 2018, the tech rivals have constantly taken aim at each other on X and in interviews, and Musk has been critical of how the company operates. According to the BBC, in 2019 OpenAI formed a for-profit division, which Musk said went against the company's original goal of not making a profit. OpenAI was founded as a nonprofit organisation.