The chatbot will see you now

Mint | 11 hours ago

It's right there in your pocket. The AI assistant on your phone won't ask you to make an appointment. It won't lose patience, and it'll eagerly give you the attention you deserve. But would you risk consulting ChatGPT, Gemini, or any of the multitude of chatbots out there for health advice?
I did. And it was worth it.
Not long ago, I noticed a pain on the inner side of my left arm that had become dramatically worse. Being a catastrophiser, I instantly thought: heart attack. Luckily, I didn't go so far as to call an ambulance. Instead, I consulted Grok Doc—what xAI calls one of Grok's voice modes.
I described the pain in detail and asked if I could be having a heart attack. Grok quizzed me: How long had the pain been there? Was there bruising or swelling? Had I strained that arm in particular? A dozen questions followed. I'd had the pain for over three weeks, so Grok confidently ruled out a heart attack.
Read this | Mint Primer: Dr AI is here, but will the human touch go away?
Because Grok had asked, I looked closely and noticed there was definitely swelling. And then, in a flash, I realised what had happened—an over-enthusiastic lab technician had made my arm hurt during a recent blood test. I told Grok, and it suggested using an ice pack thrice a day and, if really needed, taking an anti-inflammatory—within dosage limits. It told me to rest the arm and come back in two or three days, for all the world as if it were really a doctor.
I'm afraid I was too lazy for the ice packs and didn't bother with the anti-inflammatory, but I was happy to rest the arm—as well as the rest of me. The pain gradually eased.
Despite disclaimers about not replacing doctors, AI assistants will give health-related advice if you ask. That's not what Gemini, ChatGPT, Grok and their ilk are meant for, but since they can scan reputable sources in seconds, their advice can often be helpful—if the user is careful in framing questions and responsible in following through. These tools are still prone to hallucinations, and they're only as good as their training data, so they can get things wrong.
Read this | Mint Primer | Who is liable if a friendly chatbot 'abets' suicide?
Luckily, there are ways to manage that risk. Cross-check with another AI chatbot, ask the AI to verify anything that seems off, or, better still, run the suggestions by your doctor to see how on-point they are.
It's wise to be cautious, but there's no need to throw the baby out with the bathwater. I fed ChatGPT my test reports and had it explain each parameter in detail. Take uric acid, for instance—it was within range and not flagged by the lab, but on the higher side of normal. Rather than wait for it to spike, I asked ChatGPT how I could bring it down a bit. The AI gave me dietary tips and even a workout schedule, complete with specific exercises.
Frankly, AI's advice beat anything I've received from the dietician's assistant at the doctor's clinic, who usually hands over a generic, one-size-fits-all restrictive regime. Chat assistants, on the other hand, can offer deeply personalised advice. Share your height, weight, vulnerabilities and preferences, and you'll get practical, tailored guidance.
Depending on the chatbot, you can even ask for a printable version of the recommendations—handy if you want to take it along to a doctor's appointment. You can request a list of questions to ask during the visit, too.
One underrated feature of some chat assistants is the ability to go live with the camera. Turn it on, show Gemini or ChatGPT your meal, and get estimates of calories, glycemic index and more. It's surprisingly useful, and most users don't even know the feature exists. You could also show the AI an injury and get suggestions, though that edges into riskier territory.
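For readers who would rather script this than point a phone camera, the same idea works through Gemini's developer API instead of the app's live mode. The snippet below is a minimal sketch, not a description of the consumer feature: the google-generativeai package, the GEMINI_API_KEY variable, the dinner.jpg file, and the model name are all assumptions for illustration and may differ from what's current.

```python
# Hypothetical sketch: meal analysis from a photo via the Gemini API.
# Assumes: pip install google-generativeai pillow, plus a GEMINI_API_KEY
# environment variable. The model name is an assumption and may change.
import os

import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

meal_photo = Image.open("dinner.jpg")  # any snapshot of a plate
response = model.generate_content([
    meal_photo,
    "Estimate the calories and rough glycemic index of this meal. "
    "Give ranges rather than single numbers and list your assumptions.",
])
print(response.text)
```

Asking for ranges and stated assumptions, rather than a single confident number, is the same caution that applies to any health answer from a chatbot.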
Also read | The brain behind Generative AI has his sights set on India
Needless to say, anything remotely serious should be taken to a medical professional. But for supportive advice, don't rule out the chatbot.
Meanwhile, I know I'll be forever wary of lab technicians.


Related Articles

No gym, no fees, only ChatGPT: Woman shares free AI apps for fitness diets, workouts, health tracking

Time of India | 19 hours ago

In a growing shift towards tech-driven fitness solutions, a new trend is emerging — people are replacing traditional gym instructors with artificial intelligence. The combination of accessibility, customization, and cost-effectiveness has made AI tools an increasingly popular alternative to personal trainers. From building strength to managing diets, individuals are using free AI apps like ChatGPT to take full control of their health and fitness routines.

Turning to AI for Fitness and Motivation

For many, staying fit has long been tied to expensive gym memberships, group classes, and branded activewear. However, these costs don't always translate into consistent results. In one such case, a woman decided to rework her entire approach by using ChatGPT and other AI tools to manage her workouts, track progress, and design meal plans. She reported becoming stronger, leaner, and more self-disciplined — all while saving hundreds of dollars over time, as per Business Insider.

Initially reliant on a gym instructor for motivation and structure, she experienced a drop in consistency when her trainer left. Attempts to switch gyms or test alternatives failed to yield long-term commitment. Despite trying habit-tracking methods and motivational purchases, her fitness goals remained out of reach until she decided to build her own routine with AI's support.

ChatGPT and Free Apps That Deliver Results

With prior experience using ChatGPT and Perplexity AI for daily tasks like recipes and trip planning, she repurposed them to create customized workout routines. Each week, she entered her progress and goals into ChatGPT and received updated training plans. This not only kept her accountable but also allowed her to adapt workouts based on her own feedback — something she hadn't achieved even with in-person trainers.

She also turned to ChatGPT to better understand the science behind nutrition, including protein requirements for muscle growth. This led her to use additional free apps like Cronometer, which helped her track calories, macronutrients, and even micronutrients such as iron and vitamins. Another recommended app, Hevy, enabled her to log workout reps in real time, while Gymmade offered animated demonstrations for weight training, giving her confidence in using equipment effectively.

Within weeks, she began to notice tangible results. Her muscle tone improved, she doubled her lifting weight, and her energy levels increased. She even discontinued her gym membership in favor of a free local outdoor facility. Importantly, the shift was not just physical but mental — she credited AI tools with helping her understand discipline, motivation, and habit formation more deeply.

AI Fitness: Growing Popularity, But Caution Urged

Her experience mirrors that of others. Siliguri-based Avirup Nag, who preferred vegetarian diet plans, and Mumbai's Shantanu Pednekar, who needed help with portion control, also found success using AI. New Delhi's Anjana PV used AI to modify her plan when dealing with physical discomfort and received tailored suggestions to adjust her workout without overexertion. For these users, AI acted like a virtual coach, offering structure, detailed feedback, and real-time guidance.

Experts agree that AI has made it possible to personalise workouts based on individual goals and fitness levels. But they also caution users not to rely entirely on it. Fitness professionals like Kushal Pal Singh of Anytime Fitness note that while AI can create routines, it can't monitor form or prevent injury like a human trainer can.

Benefits and Boundaries

Users widely praise AI for being cost-effective and convenient. As Anjana pointed out, she was paying Rs 3,500 per month for a personal trainer before realizing AI was offering nearly identical guidance — for free. Still, health professionals warn that AI tools are only as good as the data provided to them. They may miss nuances such as movement issues, injury risks, or dietary preferences unless manually addressed. Nutritionist Muskan Soni highlighted that AI-generated meal plans often lack variety and fail to consider mental and physical health factors. This can lead to diet fatigue or plans that don't suit individual lifestyles.

Despite its limitations, AI's ability to provide structure, research-backed information, and real-time adjustments has made it a powerful tool for self-driven fitness.
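The weekly routine she describes, feeding progress and goals in and getting an updated plan back, is simple enough to picture in code. The sketch below is a hypothetical illustration using the OpenAI Python SDK, not anything she or Business Insider report using: the model name, log format, and coaching prompt are all invented for the example, and it assumes an OPENAI_API_KEY environment variable.

```python
# Hypothetical sketch of a weekly check-in like the one described above,
# automated with the OpenAI Python SDK. Assumes: pip install openai and
# an OPENAI_API_KEY environment variable; the model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

progress_log = """
Week 12: squat 50 kg x 5, bench press 35 kg x 5, two rest days taken.
Goal: build strength, keep sessions under 45 minutes, vegetarian diet.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {
            "role": "system",
            "content": "You are a cautious strength coach. Flag anything "
                       "that should be checked with a doctor or trainer.",
        },
        {
            "role": "user",
            "content": f"Here is this week's log:\n{progress_log}\n"
                       "Update next week's plan and explain each change.",
        },
    ],
)
print(response.choices[0].message.content)
```

The same loop works just as well pasted into the chat interface by hand, which is all she actually needed; the point is the feedback cycle, not the automation.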

Hey Siri, Am I Okay?: AI tools are being trained to detect suicidal signals

Time of India | a day ago

Suicidal risk identification on SNS:

The prompts fed to AI do not remain confined to everyday tasks, such as asking Alexa to play the family's favourite song, asking Siri on a random Tuesday to set a reminder, or asking Google Assistant to find a song by humming it. But what if users, in an especially low moment, were to ask, 'Am I okay?', or type other prompts that hint at a desire to harm themselves, whether through self-harm or suicide? Such attempts remain alarmingly prevalent, requiring more effective strategies to identify and support individuals at high risk.

Current methods of suicide risk assessment largely rely on direct questioning, which can be limited by subjectivity and inconsistent interpretation. Simply put, their accuracy and predictive value remain limited regardless of the large variety of scales that can be used to assess risk; predictive power has not improved over the past 50 years.

Artificial intelligence and machine learning offer new ways to improve risk detection, but their accuracy depends heavily on access to large datasets that can help identify patient profiles and key risk factors. As outlined in a clinical review, AI tools can help identify patterns in the data, generate risk algorithms, and determine the effect of risk and protective factors on suicide. The use of AI reassures healthcare professionals with an improved accuracy rate, especially when combined with their skills and expertise, even though diagnostic accuracy can never reach 100%.

According to Burke et al., there are three main goals of machine learning studies in suicide: improving the accuracy of risk prediction, identifying important predictors and the interactions between them, and modelling subgroups of patients. At an individual level, AI could allow for better identification of individuals in crisis and appropriate intervention, while at a population level, the algorithms could find groups at risk and, within those groups, individuals at risk of suicide attempts.

Social media platforms are often criticized for contributing to the mental health crisis, but they also provide a rich source of real-time data, enabling AI to identify individuals showing signs of suicidal intent. This is achieved by analyzing users' posts, comments, and behavioural patterns, allowing AI tools to detect linguistic cues, such as expressions of hopelessness or other emotional signals that may indicate psychological distress. For instance, Meta employs AI algorithms to scan user content and identify signs of distress, allowing the company to reach out and offer support or even connect users with crisis helplines. Studies such as those by the Black Dog Institute also demonstrate how AI's natural language processing can flag at-risk individuals earlier than traditional methods, enabling timely intervention.

There are also companies such as Samurai Labs and Sentinet that have developed AI-driven systems to monitor social media content and flag posts that suggest suicidal ideation. For example, Samurai Labs' 'One Life' project scans online conversations to detect signs of high suicide risk; upon detecting these indicators, the platform leads the user to support resources or emergency assistance. In the same manner, Sentinet's algorithms analyze thousands of posts daily, triggering alerts when users express some form of emotional distress and allowing for timely intervention.

While AI isn't a replacement for human empathy or professional mental health care, it offers a promising advancement in suicide prevention. By identifying warning signs faster and more precisely than human assessment alone, and by enabling early intervention, AI tools can serve as valuable allies in the fight against suicide.
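To make "detecting linguistic cues" concrete, here is a deliberately crude sketch. It is emphatically not how Samurai Labs, Sentinet, or Meta's systems work; production systems rely on trained language models and human reviewers. Every pattern and function name below is invented purely to show the flag-then-escalate shape of such a pipeline.

```python
# Toy illustration of linguistic-cue flagging. Real systems use trained
# NLP models, not keyword lists; this only shows the overall shape.
import re

# Phrases associated with expressions of hopelessness (illustrative only).
DISTRESS_PATTERNS = [
    r"\bno (reason|point) (in|to) (living|going on)\b",
    r"\b(want|wanted) to disappear\b",
    r"\bcan'?t (do this|go on) anymore\b",
    r"\bbetter off without me\b",
]

def flag_post(text: str) -> bool:
    """Return True if a post matches any high-risk pattern."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in DISTRESS_PATTERNS)

def route(text: str) -> str:
    # A real platform would escalate to trained reviewers and surface
    # crisis-helpline resources, never act on a regex match alone.
    return "escalate to human review" if flag_post(text) else "no action"

print(route("honestly I just can't go on anymore"))    # escalate
print(route("great run this morning, feeling strong"))  # no action
```

The gap between this toy and a deployed system, context, sarcasm, multilingual text, and the cost of false alarms, is exactly why the article stresses human review alongside the automation.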
