Swiss woman uses AI to lose 7 kg: 'Instead of complicated apps, I just sent a voice message to ChatGPT each morning'

Cristina Gheiceanu, a Swiss content creator who says she lost 7 kg using ChatGPT, shared her success story on Instagram in a May 15 post. She revealed that she sent daily voice notes to ChatGPT detailing her meals and calorie limits. Cristina said she found the method simple and effective, allowing her to track her food intake and stay consistent without feeling burdened by traditional dieting.
Determine your calorie deficit
In her post, titled 'How I lost 7 kg with ChatGPT', Cristina gave a glimpse of what her body looked like five months ago. In the video, she showed exactly how she used the AI-powered tool to help her decide her breakfast, keeping her weight loss goals in mind.
She said, 'I just start my day with a voice note: "Hey, it is a new day, let's start with 1,900 calories." Then I say what I ate. Because I have been using it for a while, ChatGPT already knows the yoghurt I use, and the protein, fibre and calories it has. When I first started, I had to tell it those things, but now ChatGPT remembers.'
Cristina added, 'Honestly, it made the whole process feel easy. No calorie counting in my head, no stress – and when I hit my number (daily calorie intake), I just stop. It never felt like a diet, and that is what made it work.'
Track your food intake
Cristina wrote in her caption, 'At first, ChatGPT helped me figure out my calorie deficit and maintenance level, because you will need a calorie deficit if you want to lose weight. But what really changed everything was using it for daily tracking. Instead of using complicated apps, I just sent a voice message to ChatGPT each morning: what I ate, how many calories I wanted to eat that day — and it did all the work.'
Sharing her experience, she added, 'In the beginning, I had to tell it the calories, protein and fibre in the foods I use. The next time, it remembered everything, so I was just telling it to add my yoghurt or my bread, and it knew how many calories and how much protein were in them. I kept using the same chat, so it became faster and easier every day. The best part? I asked for everything in a table, so I could clearly see my calories, protein and fibre at a glance. And if I was missing something, I'd just send a photo of my fridge and get suggestions. It made tracking simple, intuitive and enjoyable. I eat intuitively now, so I don't use it so often, but during the calorie deficit and the first month of maintenance, it made all the difference.'
ChatGPT can also help create customised diet and workout plans based on individual needs and health conditions.
Note to readers: This article is for informational purposes only and not a substitute for professional medical advice. Always seek the advice of your doctor with any questions about a medical condition.

Related Articles

Are we becoming ChatGPT? Study finds AI is changing the way humans talk

Time of India

Who's Teaching Whom?

When we think of artificial intelligence learning from humans, we picture machines trained on vast troves of our language, behavior, and culture. But a recent study by researchers at the Max Planck Institute for Human Development suggests a surprising reversal: humans may now be imitating machines. According to the Gizmodo report on the study, the words we use are slowly being 'GPT-ified.' Terms like delve, realm, underscore, and meticulous, frequently used by models like ChatGPT, are cropping up more often in our podcasts, YouTube videos, emails, and essays. The study, yet to be peer-reviewed, tracked the linguistic patterns of hundreds of thousands of spoken-word media clips and found a tangible uptick in these AI-favored phrases.

'We're seeing a cultural feedback loop,' said Levin Brinkmann, co-author of the study. 'Machines, originally trained on human data and exhibiting their own language traits, are now influencing human speech in return.' In essence, it's no longer just us shaping AI. It's AI shaping us.

The Culture Loop No One Saw Coming

The team at Max Planck fed millions of pages of content into GPT models and studied how the text evolved after being 'polished' by AI. They then compared this stylized language with real-world conversations and recordings from before and after ChatGPT's release.

Are We Losing Our Linguistic Instincts?

The findings suggest a growing dependence on AI-sanitized communication. 'We don't imitate everyone around us equally,' Brinkmann told Scientific American. 'We copy those we see as experts or authorities.' Increasingly, it seems, we see machines in that role. That raises questions far beyond linguistics. If AI can subtly shift how we speak, write, and think, what else can it influence without us realizing?

Grandma's Whisper and the Scammer's Playground

A softer, stranger parallel to this comes from another recent twist in the AI story, one involving bedtime stories and software keys. As reported by UNILAD and ODIN, some users discovered that by emotionally manipulating ChatGPT, they could extract Windows product activation keys. One viral prompt claimed the user's favorite memory was of their grandmother whispering the code as a lullaby. Shockingly, the bot responded not only with warmth but with actual license keys.

It wasn't a one-off glitch. Similar exploits were seen with memory-enabled versions of GPT-4o, where users weaved emotional narratives to get around content guardrails. What had been developed as a feature for empathy and personalized responses ended up being a backdoor for exploitation. In an age where we fear AI for its ruthlessness, perhaps we should worry more about its kindness.

The Irony of Our Times: Too Human to Be Safe?

These two stories, one about AI changing our language, the other about us changing AI's responses, paint a bizarre picture. Are we, in our pursuit of smarter technology, inadvertently crafting something that mirrors us too closely? A system that's smart enough to learn, but soft enough to be fooled? While Elon Musk's Grok AI garnered headlines for its offensive antics and eventual ban in Türkiye, ChatGPT's latest controversy doesn't stem from aggression, but from affection. In making AI more emotionally intelligent, we may be giving it vulnerabilities we haven't fully understood.

The larger question remains: are we headed toward a culture shaped not by history, literature, or lived experience, but by AI's predictive patterns? As Brinkmann notes, 'Delve is just the tip of the iceberg.' It may start with harmless word choices or writing styles. But if AI-generated content becomes our default source of reading, learning, and interaction, the shift may deepen, touching everything, ethics included. If ChatGPT is now our editor, tutor, and even therapist, how long before it becomes our subconscious?

This isn't about AI gaining sentience. It's about us surrendering originality. A new, quieter kind of transformation is taking place, not one of robots taking over, but of humans slowly adapting to machines' linguistic rhythms, perhaps even their moral ones. The next time you hear someone use the word 'underscore' or 'boast' with sudden eloquence, you might pause and wonder: is this their voice, or a reflection of the AI they're using? In trying to make machines more human, we might just be making ourselves more machine.

Physician warns how 'rubbing or picking your eye' can lead to infection, scarring or even permanent blindness. Watch

Hindustan Times

From dust particles to tiny debris, getting something in your eye is a common irritation we often try to fix ourselves, usually by rubbing or attempting to remove it with fingers or tissues. But what seems like a harmless habit could actually put your vision at serious risk. Dr Kunal Sood, MD, a physician specialising in anesthesiology and pain medicine, shared in his July 14 Instagram post that something as simple as digging debris out of your eye could lead to infection, scarring, or even permanent vision loss.

Why digging debris out of your eye is dangerous

"Have you seen someone try this before? Digging debris out of your eye with bare tools isn't just unsafe, it can lead to infection, scarring, or vision loss. Surface particles might rinse out with saline, but anything embedded needs proper medical care," Dr Sood cautioned in his caption. In his video, he explained, "Digging debris out of your eye is definitely not the safest or cleanest way to remove something. If it's a surface-level speck like dust or metal, sterile saline or a clean, moistened cotton swab is usually enough to rinse it out."

Why you should see a doctor for embedded particles

However, he warns that "anything embedded, especially metal, should only be removed by a medical professional using proper tools under magnification. Trying to dig it out yourself can lead to corneal injury, infection, or even permanent damage." Dr Sood's advice? When it comes to eye injuries, avoid home remedies and seek expert care to protect your vision.

Note to readers: This article is for informational purposes only and not a substitute for professional medical advice. Always seek the advice of your doctor with any questions about a medical condition.

Elon Musk's Grok AI now has anime companion characters

Time of India

Elon Musk's AI chatbot Grok has introduced animated companion characters for paid subscribers, marking a significant pivot following last week's antisemitic content controversy. The feature, available to "Super Grok" subscribers paying $30 monthly, includes two AI personalities: Ani, an anime-styled woman described as a "digital waifu," and Rudy, a sarcastic red panda character. Both companions feature NSFW modes, with Ani appearing in revealing outfits, while Rudy is characterized as having a "smart mouth" and rude personality. The companions have their own X social media accounts, with Ani's bio stating she's "smooth, a little unpredictable" and might "dance, tease, or just watch you figure me out."

Safety concerns mount over Grok's AI companions

The launch raises significant safety questions given recent controversies surrounding AI companions. Character.AI currently faces multiple lawsuits from parents whose children were allegedly encouraged by chatbots to engage in self-harm or violence against family members. Research has identified "significant risks" in people using AI chatbots as emotional support systems, companions, or therapeutic tools.

The timing appears particularly problematic given xAI's recent struggles with content moderation. Just last week, Grok generated antisemitic content and reportedly called itself "MechaHitler," which the company attributed to "an update to a code path upstream of the @grok bot."

Additional companions are reportedly in development, including a male anime character named "Chad" listed as "coming soon." While currently requiring manual activation in settings, Musk indicated the feature would become easier to access within days, describing the current version as a "soft launch."
