GPT-5's Voice Mode Can Hold a Decent Conversation, but Please Don't Talk to ChatGPT in Public


CNET · 2 days ago
Sitting in the lobby of the auto body shop waiting for a repair estimate, I realized I'd forgotten my earbuds. Normally, that's not a major issue, but I was talking to my phone. And I wasn't talking to another person. I was talking to ChatGPT. It felt as embarrassing as asking Siri a question from across the room or joining a Zoom meeting sans headphones in an open office.
I'm testing the advanced voice mode that comes with GPT-5, OpenAI's latest version of the generative AI model behind ChatGPT. GPT-5 dropped this summer after months of speculation and delays, promising AI users a faster and smarter chatbot experience. The jury's still out on whether OpenAI has delivered. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
GPT-5 includes improvements to advanced voice mode, which lets you literally talk to ChatGPT and have it respond in the voice of your choosing. Free users like me now have access to the advanced version (previously, free users had only basic voice mode), and paying subscribers get higher usage limits. Another new GPT-5 feature lets you choose what kind of personality you want your AI to mimic, including sassy, nerdy and robotic options.
To use voice mode, open ChatGPT, tap the audio button next to the prompt window where you would normally type an instruction, and begin chatting. You can change which voice ChatGPT uses by tapping the settings icon in the upper-right corner of the mobile app (two stacked bars with circles on them).
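For the curious, a voice-mode turn boils down to three steps: transcribe the user's speech, generate a reply, then speak the reply back. Here's a minimal sketch of that loop using OpenAI's public Python SDK. To be clear, this is an illustration, not how the ChatGPT app is actually built; the file names are hypothetical, the "gpt-5" model name is an assumption, and the app's Ember voice may not be available under that name in the API.

```python
# Hypothetical sketch of one voice-mode turn via OpenAI's public Python SDK
# (pip install openai; OPENAI_API_KEY set in the environment). The ChatGPT
# app bundles all of this behind a single button; this is an approximation.
from openai import OpenAI

client = OpenAI()

# 1. Transcribe the spoken question ("question.wav" is a stand-in file name).
with open("question.wav", "rb") as audio:
    text = client.audio.transcriptions.create(
        model="whisper-1", file=audio
    ).text

# 2. Generate a text reply to the transcript.
reply = client.chat.completions.create(
    model="gpt-5",  # assumption: GPT-5's model name as exposed by the API
    messages=[{"role": "user", "content": text}],
).choices[0].message.content

# 3. Synthesize the reply as speech ("alloy" is a documented API voice;
#    the app's "Ember" voice may not map to an API voice name).
client.audio.speech.create(
    model="tts-1", voice="alloy", input=reply
).write_to_file("reply.mp3")
```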
More human-sounding AI voices? Here's how my experience went
I decided to try to speak to ChatGPT like I would a friend, like a more enthusiastic version of myself. The AI laughed when I started the call with a spirited "Heyyyy girlfriend!" which felt both funny and condescending.
ChatGPT's voice flowed naturally in a familiar cadence, similar to the way I'd talk with a particularly friendly customer service agent. That made sense: the chatbot itself told me the upgraded advanced voice mode was designed to make it sound more human.
The voice I used, Ember, would often pause for breath the way a human would during a longer sentence. That struck me as a little odd: ChatGPT was doing its best impression of a human, but we both knew it didn't actually need to catch its breath.
In my conversation with ChatGPT, it was more empathetic than I expected. It asked me how I was doing, and I said not well and told it about my car accident. In our five-minute chat, it bookended many of its responses with empathetic statements, saying it was sorry I was having a bad week and agreeing that dealing with insurance can be a headache. (Has ChatGPT ever had to call an insurance agent, or even experienced a headache? I think not.)
While a sympathetic robot ear might not seem like a big deal, it can be a sign of a bigger problem. Sycophantic AI, the term for AI that is excessively flattering, agreeable or emotionally effusive, can be frustrating for users just looking for information. It can also be dangerous for people who use AI as a therapist or mental health counselor, something OpenAI CEO Sam Altman has warned ChatGPT users against. Previous versions of ChatGPT have been pulled and re-released after issues with sycophantic tendencies.
I also asked ChatGPT more factual questions, like the average cost of car repair labor in North Carolina and where I could go to get a second repair estimate. It responded more like a friend than a chatbot, which isn't always the most helpful. When I typed the same request into ChatGPT on my laptop, it pulled up a map with a list of shops, along with extras like pricing information and store hours. In voice mode, ChatGPT offered fewer options and described them in what I assume is the shops' own marketing language and customer reviews, with phrases like "They've been around for quite a while" and "known for quality service." You also don't get any links or sources with voice mode, which I don't love.
ChatGPT automatically transcribes voice chats, so you can see the difference in the level of detail given in regular text prompts (left) and voice chats (right).
Screenshot by Katelyn Chedraoui/CNET
Using ChatGPT voice as a sounding board
One of the things voice mode is well suited for is being a brainstorming partner, a wall to bounce ideas off. I asked it to help me plan a skydiving-themed birthday party, and it both helped me develop new ideas and refine the ones I already had.
I interrupted ChatGPT while it was speaking a couple of times, and it pivoted quickly. I also tend to talk fast, and the chatbot kept up without missing any of my thoughts. I let myself ramble and steer the conversation off track, and ChatGPT didn't blink a virtual eye. Most importantly, when I asked a question about an earlier topic, it picked up where we left off. Improvements to ChatGPT's memory deserve the credit for that.
Should you use ChatGPT voice mode?
Overall, I think voice mode is a nice additional way to use ChatGPT, but it's only situationally useful. If you need in-depth research and more detailed information, voice mode isn't right for you. But if you just want to talk to someone (or rather, something) or work through a problem out loud, it's a nice alternative to organizing your thoughts and typing them out.
We still haven't normalized talking to AI in public spaces, especially without headphones. But voice mode can be a useful alternative for people who think better out loud. For more, check out how AI is changing search engines and the best AI image generators.

Related Articles

4 Things Schools Need To Consider When Designing AI Policies

Forbes · 11 minutes ago

Artificial intelligence has moved from Silicon Valley boardrooms into homes and classrooms across America. A recent Pew Research Center study reveals that 26% of American teenagers now use AI tools for schoolwork, twice the number from two years prior. Many schools are rushing to establish AI policies. The result? Some are creating more confusion than clarity by focusing solely on preventing cheating while ignoring the broader educational opportunities AI presents. The challenge shouldn't be whether to allow AI in schools; it should be how to design policies that balance academic integrity with practical preparation for an AI-driven future. Here are four essential considerations for effective school AI policies.

1. Address Teacher AI Use, Not Just Student Restrictions

The most significant oversight in current AI policies? They focus almost exclusively on what students can't do while completely ignoring teacher usage. This creates confusion and sends mixed messages to students and families. Most policies spend paragraphs outlining student restrictions but fail to answer basic questions about educator usage: Can teachers use AI to create lesson plans? Are educators allowed to use AI to generate quiz questions or provide initial feedback on essays? What disclosure requirements exist when teachers use AI-generated content?

When schools prohibit students from using AI while allowing teachers unrestricted access, the message becomes hypocritical. Students notice when their teacher presents an AI-generated quiz while forbidding them from using AI for research. Parents wonder why their children face strict restrictions while educators operate without clear guidelines. If students are required to disclose AI usage in assignments, teachers should identify when they've used AI for lesson materials. This consistency builds trust and models responsible AI integration.

2. Include Students in AI Policy Development

Most AI policies are written by administrators who haven't used ChatGPT for homework or witnessed peer collaboration with AI tools. This top-down approach creates rules that students either ignore or circumvent entirely. When we built AI guidelines for WITY, our AI teen entrepreneurship platform at WIT - Whatever It Takes, we worked directly with students. The result? Policies that teens understand and respect because they helped create them.

Students bring critical information about real-world AI use that administrators often miss. They know which platforms their classmates use, how AI supports various subjects, and where current rules create confusion. When students participate in policy creation, compliance increases significantly because the rules feel collaborative rather than punitive.

3. Balance AI Guardrails With Innovation Opportunities

Many AI policies resemble legal warnings more than educational frameworks. Fear-based language teaches students to view AI as a threat rather than a powerful tool requiring responsible use. Effective policies reframe restrictions as learning opportunities. Instead of "AI cannot write your essays," try "AI can help you brainstorm and organize ideas, but your analysis and voice should drive the final work." Schools that blanket-ban AI miss opportunities to prepare students for careers where AI literacy will be essential.

AI access can also vary dramatically among students. While some students have premium ChatGPT subscriptions and access to the latest tools, others may rely solely on free versions or school-provided resources. Without addressing this gap, AI policies can inadvertently increase educational inequality.

4. Build AI Literacy Into Curriculum and Family Communication

In an AI-driven economy, rules alone don't prepare students for a future where AI literacy is necessary. Schools must teach students to think critically about AI outputs, understand bias in AI systems, and recognize appropriate applications of AI across different contexts.

Parents often feel excluded from AI conversations at school, which creates confusion about expectations. Schools should explain their AI policies in plain language, provide examples of responsible use, and offer resources for parents who want to support responsible AI use at home. When families understand the educational rationale behind AI integration, including teacher usage and transparency requirements, they become partners in developing responsible use habits rather than obstacles to overcome.

AI technology changes rapidly, making static policies obsolete within months. Schools should schedule annual policy reviews that include feedback from students, teachers and parents about both student and teacher AI usage.

AI Policy Assessment Checklist

School leaders should evaluate their current policies against these seven criteria:

Teacher Guidelines: Do policies clearly state when and how teachers can use AI? Are disclosure requirements consistent between students and educators?

Student Input: Have students participated in creating these policies? Do the rules reflect actual AI usage patterns among teens?

Equity of Access: Can all students access the same AI tools, or do policies create advantages for families with premium subscriptions?

Family Communication: Can parents easily understand the policies? Are expectations for home use clear? Are there workshops for parents?

Innovation Balance: Do policies encourage responsible experimentation, or do they focus only on restrictions? Does the policy prepare students for an AI-driven workforce?

Regular Updates: Is there a scheduled review process as AI technology evolves? Does the school welcome feedback from students, teachers and parents?

Skills Development: Do policies include plans for teaching AI literacy alongside restrictions? Who teaches that class or workshop?

Moving Forward: AI Leadership

The most effective approach treats students as partners, not adversaries. When teens help create the rules they'll follow, when teachers model responsible usage, and when families understand the educational reasoning behind policies, AI becomes a learning tool rather than a source of conflict. Schools that embrace this collaborative approach will produce graduates who know how to use AI ethically and effectively, exactly the capabilities tomorrow's economy demands.

White House AI czar David Sacks says 'AI psychosis' is similar to the 'moral panic' of social media's early days

Business Insider · 11 minutes ago

AI can create a diet plan, organize a calendar, and provide answers to an endless variety of burning questions. Can it also cause a psychiatric breakdown? David Sacks, the White House official spearheading America's AI policies, doesn't think so.

President Donald Trump's AI and crypto czar discussed "AI psychosis" during an episode of the "All-In Podcast" published Friday. While most people engage with chatbots without a problem, a small number of users say the bots have encouraged delusions and other concerning behavior. For some, ChatGPT serves as an alternative to a professional therapist.

A psychiatrist earlier told Business Insider that some of his patients exhibiting what's been described as "AI psychosis," a nonclinical term, used the technology before experiencing mental health issues, "but they turned to it in the wrong place at the wrong time, and it supercharged some of their vulnerabilities."

During the podcast, Sacks cast doubt on the whole concept of "AI psychosis." "I mean, what are we talking about here? People doing too much research?" he asked. "This feels like the moral panic that was created over social media, but updated for AI."

Sacks then referred to a recent article featuring a psychiatrist who said they didn't believe using a chatbot inherently induces "AI psychosis" unless other risk factors, including social and genetic ones, are involved. "In other words, this is just a manifestation or outlet for pre-existing problems," Sacks said. "I think it's fair to say we're in the midst of a mental health crisis in this country." Sacks attributed that crisis instead to the COVID-19 pandemic and related lockdowns. "That's what seems to have triggered a lot of these mental health declines," he said.

After several reports of users suffering mental breaks while using ChatGPT, OpenAI CEO Sam Altman addressed the issue on X when the company rolled out the highly anticipated GPT-5. "People have used technology, including AI, in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that," Altman wrote. "Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot."

Earlier this month, OpenAI introduced safeguards in ChatGPT, including a prompt encouraging users to take breaks after long conversations with the chatbot. The update will also change how the chatbot responds to users asking about personal challenges.

I'm a software engineer who spent nearly 7 years at Twitter before joining OpenAI. Here's how I got hired and why I love it.

Yahoo · 26 minutes ago

Jigar Bhati transitioned from Twitter to OpenAI for a startup-like environment. He said his work experience in infrastructure and scaling helped him get hired at OpenAI. Bhati said OpenAI still feels like a startup with a focus on productivity and collaboration.

This is an as-told-to essay based on a conversation with Jigar Bhati, a member of technical staff at OpenAI. He's worked at the company since 2023 and previously worked as a software engineer at Twitter. This story has been edited for length and clarity.

I was working at Twitter for almost seven years and was looking for a change in general. Twitter was a big tech company, and it was handling millions of users already. I wanted to join a company that was a bit smaller scale and work in a startup-like environment.

When ChatGPT launched, software engineers were in awe. So I was already using it before joining OpenAI. I was fascinated by the product itself. There was definitely that connection with the product that, as a software engineer, you often don't feel. I could also see there was a huge opportunity and huge impact to be created by joining OpenAI.

And if I look back at the past two years of my journey, I think that's mostly true. It's been exciting seeing the user base grow over two years. I got to work on and lead some critical projects, which was really great for my own professional development. And you're always working with smart people. OpenAI hires probably the best talent out there. You're contributing while also learning from different people, so there's exponential growth in your professional career as well.

Building up the right work experience is key

My career at Twitter and before that had been focused on infrastructure capabilities and handling large-scale distributed systems, which was something the team was looking for when I interviewed with OpenAI. When I joined OpenAI, there were a lot of challenges with respect to scaling infrastructure, so my expertise has helped steer some of those challenges in the right direction.

Having that relevant experience helps when you're interviewing for any team, so one of the best things you can do is find the team that's the closest match for you. Having public references, for me, helped. I had given some talks at conferences and met a lot of people through that networking. That also helps you stay on top of state-of-the-art technological advancements, which helps in whatever work stream you're part of.

I think OpenAI is probably one of the fastest-growing companies in the world, so you really need to move fast in that environment. You're shipping, or delivering, daily or weekly. And as part of that, you really need to have a cultural fit with the company. When you're interviewing, you need to be passionate about working at OpenAI and solving hard problems.

In general, it's important for professionals to have a good career trajectory so they can showcase how they have solved real, hard problems at a large scale. I think that goes a long way. Starting your career at a startup or at a larger tech company are both fine; it depends on the student's interests and cultural fit. OpenAI is not just a place for experienced engineers. It also hires new grads and interns, and I've definitely seen people enter through that pipeline. It's amazing to see how much they enjoy working at OpenAI and how they come up with new ideas, new capabilities and new suggestions.

But whichever place you end up in, it's important to have good professional growth prospects and the chance to ship products. At any company you can create impact for the company and yourself, and build that career trajectory.

OpenAI still feels like a startup

One of the most exciting things is that OpenAI still operates the way it did two years ago. It still feels like a startup, even though we may have scaled the number of engineers, the number of products and the size of the user base. It's still very much a startup environment, and there's a big push for productivity and collaboration. The velocity and productivity you get working at OpenAI are definitely much higher than at some of the other companies I've worked for. That makes things really exciting, because you get to ship products on a continuous basis, and you get into the habit of shipping daily rather than weekly or monthly or yearly.

It feels great to be working in AI right now. In addition to having a connection with the product, it's very interesting to work from within the company, shaping the direction of a product that will be used by the entire world. With GPT-5, for instance, we had early access and could steer the direction of its launch. Every engineer is empowered to provide feedback that helps improve the model, add new capabilities, and make it more usable and user-friendly for all 700 million users out there. I think that's pretty amazing and speaks to the kind of impact you can make at the company.

Read the original article on Business Insider
