I swapped the Calm app for ChatGPT Voice — here's what surprised me


Tom's Guide · 5 hours ago

Like a lot of people, I rely on calming apps to help me wind down after chaotic days. With three kids and a busy schedule, every day is chaotic for me. For years, the Calm app has been my go-to for sleep stories, meditations and mid-day stress breaks — but with AI voice assistants getting smarter, I started wondering: could ChatGPT Voice replace my Calm subscription and save me a little each month?
I already use ChatGPT Voice as part of my ChatGPT Plus subscription almost daily for brainstorming ideas, quick research and productivity hacks. So I figured why not test it as a pocket-sized calming coach?
For one week, I swapped out the Calm app and used ChatGPT Voice for meditation, stress relief and sleep. Here's what worked, what didn't and what I would recommend.
There's no official 'calm mode' in ChatGPT, but with a few simple voice prompts, you can easily get it to guide meditations, breathing exercises or calming conversations.
I didn't even need to use custom instructions. Once you pick a ChatGPT voice, you simply prompt the chatbot with the kind of session you want.
At first, the biggest challenge was that ChatGPT Voice doesn't have built-in soundscapes or pre-recorded sessions like Calm (unfortunately, no Harry Styles voiceover here).
The first few sessions felt a little robotic, but by day 3, ChatGPT Voice was adapting to my preferences: slower pace, softer tone and more natural breathing cues. You can even ask the chatbot to whisper.
I had to be a little more proactive with my requests, and the chatbot often stopped too soon, forcing me to prompt 'keep going.' However, I quickly found that the flexibility was a big plus. I could ask for exactly the kind of calming I needed in the moment, without browsing menus. I didn't have to rely on a meditation app template, and I could change the meditation on the fly.
By midweek, I started using ChatGPT Voice for quick 'reset' breaks between meetings and writing — something I rarely did with Calm, since it takes longer to navigate. I'd say: 'Let's do a quick mindful break,' and in seconds, I'd get a calming, 2–3 minute reset.
Sleep support was a mixed bag — the voice is soothing, but without background sounds, I didn't fully relax the way I would with a Calm sleep story.
Pros:
• More personalized than Calm — you can shape the tone and style
• No extra subscription (already part of ChatGPT Plus)
• Faster for on-demand calming moments
• Learns your preferences over time
Cons:
• No built-in music, soundscapes or polished narrators
• Slight learning curve to figure out the best prompts
• Not as effective for deep sleep as Calm's full app
If you already use ChatGPT Plus, Voice can absolutely double as a casual calming app — and it's more flexible than you'd think.
For those who rely heavily on the immersive sleep stories and ambient soundscapes of Calm, it's probably not a full replacement (yet).
But for quick daytime resets, guided breathing, and moments of calm on the fly? I'm keeping ChatGPT Voice in my wellness toolkit.


Related Articles

New Smarsh UK Survey Shows AI Adoption Surges in Financial Services—But Employees Demand Stronger Guardrails

Business Wire

an hour ago



LONDON--(BUSINESS WIRE)--Today, Smarsh®, the global leader in communications data and intelligence, reveals new research that highlights UK financial services employees' perspectives on artificial intelligence (AI). The data indicates that while AI is fundamentally changing how UK financial services employees work and communicate, employees are increasingly aware of the risks it poses to businesses that fail to address training, privacy and compliance issues. Over a third of UK employees often use AI tools for work; however, nearly 70% would feel more confident if all AI outputs were captured and monitored for transparency.

Balancing innovation with risk from the bottom up

The study, based on responses from 2,000 UK employees in financial services and insurance, highlights how AI is now a pivotal part of daily business, with over a third often using AI tools for work purposes. However, 55% of employees say that they have never received any official training for using such tools. The research also reveals that:

• Over a third (37%) say that they often use public AI tools – like ChatGPT or Microsoft 365 Copilot – for work purposes
• 38% are not sure whether their organisation has tools and processes to capture and monitor the outputs of AI tools, with a further 21% explicitly saying that they know their firm does not
• Over two thirds (69%) would feel more confident using AI tools if all outputs were captured and monitored for transparency

'AI adoption in financial services has accelerated rapidly, with employees embracing these tools to boost productivity,' said Tom Padgett, President, Enterprise Business at Smarsh. 'But with innovation comes responsibility. Firms must establish the right guardrails to prevent data leaks and misconduct. The good news is that employees are on board—welcoming a safe, compliant AI environment that builds trust and unlocks long-term growth.'

Licence to Skill? Employees unsure of AI Agent compliance

The research also reveals employees' views on the AI tools being used within their organisation for public, customer-facing tasks – specifically AI Agents, systems that are able to perform specific tasks without human intervention, such as a chatbot. The findings come as the FCA is set to give UK financial services firms the green light on ambitious proposals for AI innovation with an AI live testing service, aimed at supporting firms with implementing consumer-facing AI tools.

'Using public AI tools without controls is digital negligence,' said Paul Taylor, Vice President of Product at Smarsh. 'You're effectively feeding your crown jewels into a black box you don't own, where the data can't be deleted and the logic can't be explained. It's reckless. Private tools like Microsoft 365 Copilot and ChatGPT Enterprise are a step in the right direction. Still, if companies aren't actively capturing and auditing usage, they're not securing innovation – they're sleepwalking into a compliance nightmare.'

Smarsh enables organisations to govern AI effectively, allowing them to facilitate compliant use of the technology while protecting employees from the new risks it poses. With AI-enabled compliance and communications surveillance solutions such as Smarsh Intelligent Agent and Capture for Microsoft 365 Copilot and ChatGPT Enterprise, financial services firms can future-proof their communications infrastructure. Discover how to stay compliant and competitive with Smarsh's recent AI in Communication Surveillance report.

About Smarsh

Smarsh enables regulated organizations of all sizes to capture, archive and monitor data from business communications to help identify regulatory and reputational risks before those risks become fines or headlines. The Smarsh Enterprise Platform's AI-enabled applications, including Capture, Archive, Conduct and Discovery, give organizations the power to manage risk and unleash the intelligence in their digital communications at unmatched scale. Smarsh serves a global client base spanning the top banks in North America, Europe, and Asia, along with leading brokerage firms, insurers, registered investment advisors, and U.S. state and local government agencies. To discover more about the future of communications capture, archiving and oversight, visit or follow Smarsh on LinkedIn.

About the research

This online survey of 2,000 UK adults who work in financial services or insurance was commissioned by Smarsh and conducted by market research company OnePoll, in accordance with the Market Research Society's code of conduct. Data was collected between 8th April 2025 and 17th April 2025. All participants are double opted in to take part in research and are paid an amount depending on the length and complexity of the survey. This survey was overseen and edited by the OnePoll research team. OnePoll is an MRS Company Partner, a corporate member of ESOMAR and a member of the British Polling Council.

Educators warn that AI shortcuts are already making kids lazy: ‘Critical thinking and attention spans have been demolished'

New York Post

an hour ago



A new MIT study suggests that AI is degrading critical thinking skills — which does not surprise educators one bit. 'Brain atrophy does occur, and it's obvious,' Dr. Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, told The Post. 'Talk to any professor in the humanities or social sciences and they will tell you that students who just throw in a prompt and hand in their paper are not learning.'

Researchers at MIT's Media Lab found that individuals who wrote essays with the help of ChatGPT showed less brain activity while completing the task, committed less to memory and grew gradually lazier in the writing process over time. Eight researchers affiliated with the MIT Media Lab carried out the study over four months: a group of 54 18- to 39-year-olds was split into three cohorts — one using ChatGPT, one using Google search and one 'brain-only' — and asked to write four SAT essays while scientists monitored their brain activity with EEG scans. The ChatGPT group had the lowest brain engagement when writing and showed lower executive control and attention levels.

Over four sessions, the participants in the study's ChatGPT group started to use AI differently. At first, they generally asked for broad and minimal help, like with structure. But near the end of the study period, they were more likely to resort to copying and pasting entire sections of writing.

Murphy Kenefick, a high-school literature teacher in Nashville, said he has seen first-hand how students' 'critical thinking and attention spans have been demolished by AI.' 'It's especially a problem with essays, and it's a fight every assignment,' he told The Post. 'I've caught it about 40 times, and who knows how many other times they've gotten away with it.'

In the MIT study, the 'brain-only' group had the 'strongest, wide-ranging networks' in their brain scans, showing heightened activity in regions associated with creativity, memory and language processing. They also expressed more engagement, satisfaction and ownership of their work. 'There is a strong negative correlation between AI tool usage and critical thinking skills, with younger users exhibiting higher dependence on AI tools and consequently lower cognitive performance scores,' the study's authors warn. 'The impact extends beyond academic settings into broader cognitive development.' Asked to rewrite prior essays, the ChatGPT group was least able to recall them, suggesting they didn't commit them to memory as strongly as the other groups did.

The ChatGPT group also tended to produce more similar essays, prompting two English teachers brought in to evaluate the essays to characterize them as 'soulless' — something teachers all over the country say they are seeing more regularly. Robert Black, who retired last week from teaching AP and IB high school history in Canandaigua, New York, said that the last two years of his 34-year career were a 'nightmare because of ChatGPT.' 'When caught, kids just shrug,' he said. 'They can't even fathom why it is wrong or why the writing process is important.'

Black also points out that AI has only worsened a gradual erosion of skills that he attributes to smartphones. 'Even before ChatGPT it was harder and harder to get them to think out a piece of writing — brainstorming, organizing and composing,' he told The Post. 'Now that has become a total fool's errand.'

Psychologist Jean Twenge, author of the forthcoming book '10 Rules for Raising Kids in a High-Tech World,' agrees that AI is just one additional barrier to learning for Gen Z and Gen Alpha. She points out that international math, reading and science standardized test scores have been on the decline for years, which she attributes to pandemic lockdowns and the advent of smartphones and social media. 'With the addition of AI, academic performance will likely decline further, as students who regularly use AI to write essays are not learning how to write,' Twenge told The Post. 'When you don't learn how to write, you don't learn how to think deeply.'

The MIT study was spearheaded by Media Lab research scientist Nataliya Kosmyna, who told Time Magazine that 'developing brains are at the highest risk.' While Toby Walsh, Chief Scientist at the University of New South Wales AI Institute in Sydney, Australia, acknowledges that the study's findings are frightening, he also warns educators against banning AI outright. 'We have to be mindful that there are great opportunities. I'm actually incredibly jealous of what students have today,' Walsh said, recalling his 15-year-old daughter recently using an AI voice to ask her questions in French as a study aid. 'I don't think we should be banning AI,' Walsh said. But, he added, 'the concern is that AI surpasses human intelligence, not because AI got better but because human intelligence got worse.'

Kenefick, meanwhile, imagines his students 'wouldn't care' about the study's findings: 'They just want the grade. They see no real incentive to develop any useful skills. It's very troubling.'

I've become an AI vibecoding convert

Fast Company

2 hours ago



A few weeks ago, I finally paid for ChatGPT Plus. It started with a simple goal: I wanted to create a personal archive of my published articles, but wasn't sure how to begin. That led to a long back-and-forth with ChatGPT, where we built a Python script to scrape my author pages, download the content, and format everything cleanly. By the time I hit the free usage limit, I was too invested to pause or switch to another chatbot. So I upgraded.

In hindsight, the money ($20 per month) was well spent. For one, it worked: I now have a folder on my computer containing more than a decade's worth of articles. More importantly, this was the moment that AI tools clicked for me. I've had little success using them to write, and often recoil at the images they churn out. When I ask ChatGPT and Google Gemini for factual information, they're liable to get the details wrong on all but the most widely understood topics. But in this case, ChatGPT saved me days of tedious work—and opened my eyes to what else might now be possible. (Andrej Karpathy coined the term 'vibecoding' for this idea of creating code without knowing how to code.) If I could use AI to build personal Python scripts, what other plugins or extensions could I try next?

Web extensions, plug-ins, and more

Unlike my colleague Harry McCracken, I'm not using AI to dream up entirely new apps. I already have too many apps from actual professionals on my phone and computer, and I don't trust AI (or myself) enough to compete with them. What I've really gotten into, though, is using AI to extend and improve the software I already use. For instance, I take notes and draft stories in Obsidian, an app that's endlessly extensible via user-created plug-ins. I've always dreamed of a quick-note plug-in for Obsidian that matches the simplicity of Google Keep, but had yet to find anything that works. After a few hours of vibecoding, I finally built the plug-in myself.
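For a sense of what a scrape-and-archive script like the one described above involves, here is a minimal, standard-library-only sketch of the link-extraction step. Everything specific in it is hypothetical (the `article-title` class, the function names, the sample markup); the author's actual script isn't published, and a real author page would need its own markup inspected first.

```python
from html.parser import HTMLParser

class ArticleLinkParser(HTMLParser):
    """Collects href values from <a> tags that look like article links.
    The 'article-title' class is an invented example."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "article-title" in attrs.get("class", ""):
            self.links.append(attrs.get("href"))

def extract_article_links(html: str) -> list[str]:
    parser = ArticleLinkParser()
    parser.feed(html)
    return parser.links

# Stand-in for a fetched author page. A real script would download the
# page (urllib.request, requests, etc.), then fetch and save each
# linked article in turn.
sample = """
<ul>
  <li><a class="article-title" href="/story/one">One</a></li>
  <li><a class="article-title" href="/story/two">Two</a></li>
  <li><a class="nav" href="/about">About</a></li>
</ul>
"""
print(extract_article_links(sample))  # ['/story/one', '/story/two']
```

Most of the effort in a project like this goes into iterating on exactly these details (which tags mark an article link, how pagination works) until the output is clean, which is where the back-and-forth with ChatGPT comes in.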
Now, through Obsidian's right sidebar, I can view all the notes from any folder in a card-based layout and edit them directly from the sidebar. The plug-in also lets me pin notes to the top, create new notes with a single click, send notes to an archive folder, and search with real-time results. It even works in Obsidian's mobile app, with the quick-notes view just a swipe away.

I've also been tweaking some existing plug-ins for Flow Launcher, a free Windows app for executing quick actions from a command bar. I took a plug-in for window management and added some new sizing options, and I modified a browser history search plug-in to make it work with my current browser (Floorp).

AI tools are also useful for creating browser bookmarklets, which are special kinds of bookmarks for doing things like decluttering web pages and speeding up videos. I already wrote an entire article about that, but now I've created an additional bookmarklet for downloading YouTube videos. It works by connecting to a local Python server that silently processes video links and sends them to my Downloads folder.

In all cases, the process was the same: I would tell ChatGPT exactly what I was trying to make and ask for clear, step-by-step instructions. I'd follow the instructions, compile the code, and go back to ChatGPT for fixes or refinements.

Some assembly required

I don't want to oversell vibecoding as an effortless activity. Each of the above projects took hours to build, as I inevitably fell down a rabbit hole of tweaking, clarifying, and troubleshooting. That's partly because AI can be as unreliable in coding as it is in other endeavors. ChatGPT has a habit of confidently declaring that it's produced working code, only for errors to appear when compiling or running it. I've spent hours feeding it error messages, trying to get it to recognize basic syntax issues or missing functions.
On several occasions, I've had to abandon a chat entirely and start a new one after the code modifications veered too far off track. Even when everything is working properly, it's easy to fall prey to scope creep. You might think it's simple to add a new feature or tweak the design, but those changes can easily turn into additional hours of refining and fixing. (In fairness, this happens in actual software development, too.)

And while you can accomplish a lot without formal programming knowledge, you'll still need a solid grasp of how file systems work and some basic sense of what code should look like. ChatGPT might ask you to replace one snippet with another, or mistakenly claim it's providing full code when large portions are missing. Being able to spot when AI is about to screw up can go a long way.

Take some control back

One last disclaimer: I'm not nearly confident enough in what I've created to share it with the world, as I'm sure other folks would run into bugs or ask for features that I'm thoroughly unqualified to address. I also wouldn't suggest vibecoding anything that handles sensitive data or important personal information.

But for the things vibecoding is good at—small, personal utilities that no one else would want to make—it's immensely satisfying and even empowering. As Techdirt's Mike Masnick recently pointed out, lowering the barriers to software development is a great way to push back against 'enshittification' by major tech companies, whose products inevitably get weighed down by the need to scale and extract more money from captive users.

To that point, you don't even need AI anymore once the vibecoding is done. Having built what I need for the foreseeable future, I cancelled my ChatGPT Plus subscription after a single month's payment.
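As for the bookmarklet-plus-local-server trick the piece describes, the plumbing can be quite small. This standard-library sketch is an assumption about the shape of such a server, not the author's code: the `/download` endpoint, the port, and the JSON reply are invented, and the actual download step (for instance, a subprocess call to a tool like yt-dlp) is left as a comment.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def extract_video_url(path):
    """Pull the 'url' query parameter from a request path such as
    /download?url=https://example.com/watch/abc (hypothetical endpoint)."""
    parsed = urlparse(path)
    if parsed.path != "/download":
        return None
    return parse_qs(parsed.query).get("url", [None])[0]

class DownloadHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        video_url = extract_video_url(self.path)
        if video_url is None:
            self.send_response(404)
            self.end_headers()
            return
        # A real server would hand video_url to a downloader here
        # (e.g. invoke yt-dlp) and write the result to Downloads.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"queued": video_url}).encode())

if __name__ == "__main__":
    # The bookmarklet would simply request
    # http://127.0.0.1:8765/download?url=<current page URL>.
    HTTPServer(("127.0.0.1", 8765), DownloadHandler).serve_forever()
```

The bookmarklet side is then just a one-line piece of JavaScript that sends the current page's URL to that local port, which is why the whole setup feels "silent" in use.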
