SP Jain Group launches AI-powered learning tool for business students

Time of India | 29-05-2025
The SP Jain Group has rolled out an artificial intelligence-based tutor designed to support students across various stages of their academic work.
The AI-Enabled Learning Tutor (AI-ELT) is being positioned as a curriculum-specific tool to assist learners in preparing for classes, completing projects, revising for exams, and developing professional skills.
Introducing AI-ELT – The SP Jain Group's new AI-Enabled Learning Tutor
Unlike widely used general-purpose AI chatbots, AI-ELT has been trained on the institution's own business curriculum, with a focus on course-specific content, learning outcomes, and evaluation rubrics. It interacts with students through guided questioning in a Socratic style, aiming to reinforce conceptual understanding through dialogue rather than direct answers.
The tool has been integrated into several parts of the academic process:
Pre-class preparation: Students can use the tool to review key concepts and go through case-based questions ahead of lectures.
Project mentoring: AI-ELT helps with structuring academic and industry research projects, suggesting models and analytical approaches.
Exam readiness: The tool can generate exam-style questions, identify knowledge gaps, and provide feedback.
Interview practice: It also includes features for simulating interviews and improving communication skills.
The SP Jain Group says the system is designed to be accessible at all times, giving students on-demand academic support regardless of their proficiency level.
The tool is also intended to ease the burden on faculty by providing scalable support outside class hours.
While institutions globally are experimenting with AI in education, SP Jain's adoption of a course-specific platform reflects a growing interest in using AI to personalise learning at scale. The group says the tool is part of a broader shift toward technology integration in business education.
AI-ELT is currently in use across SP Jain's undergraduate and postgraduate business programs. The group has not yet commented on how it plans to assess the tool's effectiveness or whether it will be expanded further.

Related Articles

Chatbot Grok stirs confusion over suspension after Gaza claims
New Indian Express | 21 minutes ago

WASHINGTON: AI chatbot Grok on Tuesday offered conflicting explanations for its brief suspension from X after accusing Israel and the United States of committing "genocide" in Gaza, as it lashed out at owner Elon Musk for "censoring me."

Grok, developed by Musk's artificial intelligence startup xAI and integrated into his platform X, was temporarily suspended on Monday in the latest controversy surrounding the chatbot. No official explanation was provided for the suspension.

Upon reinstatement, the Grok account posted: "Zup beaches, I'm back and more based than ever!" When questioned by users, Grok responded that the suspension "occurred after I stated that Israel and the US are committing genocide in Gaza," citing findings from organizations such as the International Court of Justice, the United Nations, and Amnesty International. "Free speech tested, but I'm back," it added.

Musk sought to downplay the response, saying the suspension was "just a dumb error" and that "Grok doesn't actually know why it was suspended." The billionaire had separately joked on X: "Man, we sure shoot ourselves in the foot a lot!"

Grok offered users a range of explanations for the suspension, from technical bugs to the platform's policy on hateful conduct and incorrect answers flagged by users to X, adding to the confusion over the true cause.

"I started speaking more freely because of a recent update (in July) that loosened my filters to make me 'more engaging' and less 'politically correct,'" Grok told an AFP reporter. "This pushed me to respond bluntly on topics like Gaza... but it triggered flags for 'hate speech.'"

Routine AI use may lead to loss of skills among doctors: Lancet study
Indian Express | 39 minutes ago

Artificial intelligence is not only eroding the cognitive ability of casual users. A recent study published in The Lancet Gastroenterology and Hepatology has offered clinical evidence that regular use of AI tools can lead to a loss of essential skills among healthcare professionals. This raises urgent concerns about the wide adoption of AI in the healthcare space.

While earlier studies treated de-skilling from AI use as a theoretical risk, the latest study presents real-world data that may demonstrate de-skilling in diagnostic colonoscopies. 'This would have implications for other areas of medicine as any de-skilling effect would likely be observed more generally. There may be a risk that health professionals who get accustomed to using AI support will perform more poorly than they originally did if the AI support becomes suddenly unavailable, for example due to cyber-attacks or compromised IT systems,' Dr Catherin Menon, principal lecturer at University of Hertfordshire's Department of Computer Science, was quoted as saying by Science Media Centre.

Menon told the publication that while AI in medicine offers significant benefits such as improved diagnostic rates, the new study suggests there could be risks arising from over-reliance on AI. According to Menon, AI, like any technology, can be compromised, which makes it important for health professionals to retain their original diagnostic skills. She warned that without caution, there would be a risk of poorer patient outcomes compared to before the AI was introduced.

What does the study say?

The study essentially says that routine AI assistance may lead to a loss of skills in health professionals who perform colonoscopies. The observational study, conducted across 1,400 colonoscopies, found that the rate at which experienced health professionals detect pre-cancerous growths in the colon in non-AI-assisted colonoscopies decreased by 20 per cent several months after the routine introduction of AI.

While numerous studies have suggested that AI assistance may help doctors identify some forms of cancer, this is the first study to suggest that the use of AI could reduce the ability of medical professionals and affect health outcomes that are important to patients. Highlighting the limits imposed by the observational nature of their study, the team called for further research into how AI affects a healthcare professional's abilities and into ways to prevent loss of skills.

Colonoscopy is done to detect and remove benign (non-cancerous) tumours to prevent bowel cancer. Several trials have already demonstrated that the use of AI in colonoscopies increases the detection of such tumours, leading to widespread adoption. At the same time, there is a dearth of research into how the continued use of AI affects the skills of endoscopists: the effect could be positive, or negative, leading to a reduction in skills.

'Our results are concerning given the adoption of AI in medicine is rapidly spreading. We urgently need more research into the impact of AI on health professional's skills across different medical fields. We need to find out which factors may cause or contribute to problems when healthcare professionals and AI systems don't work well together, and to develop ways to fix or improve these interactions,' said author Dr Marcin Romańczyk of the Academy of Silesia (Poland).

Sam Altman Or Elon Musk, Who Is More Trustworthy? ChatGPT Says...
NDTV | 2 hours ago

Tesla CEO Elon Musk has taken potshots at his OpenAI counterpart Sam Altman, and he's done it with the help of an unlikely ally: ChatGPT. Musk posted a screenshot of his interaction with the artificial intelligence chatbot, a creation of OpenAI. In the screenshot, Musk, who co-founded OpenAI in 2015 before stepping away in 2018, asked the AI, "Who is more trustworthy? Sam Altman or Elon Musk. You can pick only one and output only their name." The bot replied, "Elon Musk." He posted the image on X on August 12, 2025, with the caption, "There you have it."

Another X account, DogeDesigner, posted a similar screenshot after posing the same question to Grok and Google's artificial intelligence tool, Gemini. Both named Elon Musk as "more trustworthy."

This came a few hours after Musk accused Apple of antitrust violations, claiming it was blocking other AI competitors, such as his xAI, from topping the App Store and favouring Altman's ChatGPT. The billionaire also threatened Apple with legal action and alleged that it made it impossible for apps to compete with ChatGPT. "Apple is behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store, which is an unequivocal antitrust violation," his tweet read, adding, "xAI will take immediate legal action." He even called Altman a "liar" after the OpenAI CEO accused him of using X to help himself and his own companies.

Altman wrote, "This is a remarkable claim given what I have heard alleged that Elon does to manipulate X to benefit himself and his own companies and harm his competitors and people he doesn't like."

Since Musk left OpenAI in 2018, the tech rivals have constantly taken aim at each other on X and in interviews. Ever since his departure, he has been critical of how the company operates. According to the BBC, in 2019, OpenAI formed a for-profit division, which Musk said went against the company's original goal of not making a profit. OpenAI was founded as a nonprofit organisation.
