Latest news with #AIConversation


The Guardian
a day ago
- Science
- The Guardian
New research centre to explore how AI can help humans ‘speak' with pets
If your cat's sulking, your dog's whining or your rabbit's doing that strange thing with its paws again, you will recognise that familiar pang of guilt shared by most other pet owners. But for those who wish they knew just what was going on in the minds of their loyal companions, help may soon be at hand – thanks to the establishment of the first scientific institution dedicated to empirically investigating the consciousness of animals.

The Jeremy Coller Centre for Animal Sentience, based at the London School of Economics and Political Science (LSE), will begin its work on 30 September, researching non-human animals, including those as evolutionarily distant from us as insects, crabs and cuttlefish. Harnessing a wide range of interdisciplinary global expertise, the £4m centre's work will span neuroscience, philosophy, veterinary science, law, evolutionary biology, comparative psychology, behavioural science, computer science, economics and artificial intelligence.

One of its most eye-catching projects will be to explore how AI can help humans 'speak' with their pets, the dangers of it going wrong – and what we need to do to prevent that happening. 'We like our pets to display human characteristics and with the advent of AI, the ways in which your pet will be able to speak to you is going to be taken to a whole new level,' said Prof Jonathan Birch, the inaugural director of the centre. 'But AI often generates made-up responses that please the user rather than being anchored in objective reality. This could be a disaster if applied to pets' welfare,' said Birch, whose input to the Animal Welfare (Sentience) Act led to it being expanded to include cephalopod molluscs and decapod crustaceans.

Birch points to separation anxiety: dog owners often want reassurance that their pet is not suffering when left alone for long periods.
Futuristic 'translation' apps based on large language models could promise to provide that reassurance, but end up causing harm by telling owners what they want to hear rather than what the animal actually needs. 'We urgently need frameworks governing responsible, ethical AI use in relation to animals,' said Birch. 'At the moment, there's a total lack of regulation in this sphere. The centre wants to develop ethical guidelines that will be recognised globally.'

Birch also points to the lack of regulation around animals and driverless cars: 'We have a lot of debate around them not hitting people but we don't talk about them also avoiding cats and dogs.'

AI and farming is another urgent issue for the centre. 'Farming is already embracing automation in a huge way and that's going to increase at pace,' Birch said. 'But it is happening without much scrutiny or discussion, which raises huge ethical questions about what the limits are: should farming involve caring relationships with animals? If so, the current direction is not the way in which we want farming to go.'

The centre will work with non-governmental organisations to develop guidance, research and codes of practice that can be lobbied for around the world.

Jeff Sebo, the director of the Center for Environmental and Animal Protection at New York University, said issues of animal sentience and welfare, the effects of AI on animals, and public attitudes towards animals were 'among the most important, difficult and neglected issues that we face as a society'. 'Humans share the world with millions of species and quintillions of individual animals, and we affect animals all over the world whether we like it or not,' he said.

Prof Kristin Andrews, one of the new centre's trustees, said she believed it could answer what she regards as the biggest question in science: what is human consciousness – and how can it be switched back 'on' in cases of stroke and other medical emergencies?
'We still don't understand what makes humans conscious, or why anyone starts or stops being conscious,' she said. 'But we do know that the way to get answers is to study simple systems first: science has made great strides in genomics and in medicine by studying simple organisms.'

Dr Kristof Dhont, another trustee, said he was fascinated by human attitudes towards animal sentience. 'One of the most pressing behavioural challenges of our time is how to close the gap between what people believe about animals and how they actually behave towards them,' he said. 'Most people care deeply about animals but there are all these systems, habits, norms and economic profits that get in the way of translating that into the way we treat animals.

'I want to use behavioural science to understand, for example, why there's resistance to eating cultivated meat even though we all agree that it would save creatures who feel pain from being killed.'

Jeremy Coller, whose foundation made the multiyear commitment to the centre, said his aim was to change attitudes in our 'speciesist species'. 'Only when we have a better understanding of how other animals feel and communicate will we be able to acknowledge our own shortcomings in how we treat them,' he said. 'Just as the Rosetta Stone unlocked the secrets of hieroglyphics, I am convinced the power of AI can help us unlock our understanding of how other animals experience their interactions with humans.'


CNET
17-06-2025
- CNET
ChatGPT's Voice Feature Makes It Feel More Human Than Ever
After years of dealing with voice assistants that constantly misheard me or cut me off mid-sentence, I didn't expect much when I tapped the little wavelength icon to try ChatGPT's Voice Mode. I figured it would be another feature that sounded cool but fell flat in real use. But this one surprised me.

Voice Mode doesn't just spit out answers. It actually feels like a real conversation. It picks up on pauses, mumbled thoughts, and even those filler words like "uhhh" without breaking the flow. Whether I'm driving, cooking, or just trying to multitask, I can speak naturally and get helpful answers without picking up my phone. It's not just faster than typing; it feels easier, more intuitive and way more efficient. If you haven't given it a shot yet, here's why ChatGPT's Voice Mode could become your favorite way to use AI.

ChatGPT, from OpenAI, isn't the only chatbot going hands-free. Google's Gemini Live offers the same "talk over me, and I'll keep up" vibe. Anthropic's Claude has a beta version of its voice mode on its mobile apps, complete with on-screen bullet points as it speaks, and Perplexity's iOS and Android assistant also answers spoken questions and launches apps like OpenTable or Uber on command. But even with everyone racing to master real-time AI conversation, ChatGPT remains my go-to. Whatever your chatbot of choice, take a break from the typing and try out the voice option. It's far more useful than you think.

(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

What exactly is voice mode?

Voice chat (or "voice conversations") is ChatGPT's hands-free mode that lets you talk to the AI model and hear it talk back to you, no typing required.
There's a voice icon that you'll find in the mobile, desktop and web app on the bottom-right of any conversation you're in. If you press the button, you can say your question aloud and ChatGPT will transcribe it, reason over it and reply. As soon as it's done talking, it starts listening again, creating a natural back-and-forth dialogue. Just remember: Voice mode runs on the same large language model as regular ChatGPT, so it can still hallucinate or get facts wrong. You should always double-check anything important.

OpenAI offers two versions of these voice conversations: Standard Voice (the default, lightweight option for free) and Advanced Voice (only available for paid users). Standard Voice first converts your speech to text and processes it with GPT-4o (and GPT-4o mini), taking a little bit longer to talk back to you. Advanced Voice, on the other hand, uses natively multimodal models, meaning it "hears" you and generates audio, so the conversation is more natural and done in real time. It can pick up on cues other than the words themselves, like the speed you're talking or the emotion in your voice, and adjust to this. Note: Free users can access a daily preview of Advanced Voice.

7 reasons you should start using ChatGPT's voice mode feature

1. It's genuinely conversational

Unlike typing, when I talk to ChatGPT, I'm not hunting for the right word or backspacing after every typo. I'm just speaking, like I would with any friend or family member, filled with "ummmmms" and "likes" and other awkward breaks. Voice mode rolls with all of my half-finished thoughts, though, and responds with either a fully fleshed-out answer or a question to help me home in on what I need. This effortless give-and-take feels much more natural than typing.

2.
You can use ChatGPT hands-free

Obviously, I still need to open the ChatGPT app and tap on the voice mode button to start, but once I begin, I no longer have to use my hands to continue a conversation with the AI chatbot. I can be stuck in traffic and brainstorm a vacation that I want to take later this year. I can ask about flights, hotels, landmarks, restaurants and anything else, without touching my phone, and that conversation is saved within the app, so that I don't have to remember everything that ChatGPT tells me.

3. It's good for learning a new language with real-time translation

Language practice is something voice mode excels at. I can speak in English and have ChatGPT respond in flawless Polish, complete with pronunciation tips. Just ask voice mode, "Can you help me practice my (language)?" and it'll respond with a few ways it can help you, like conversation starters, basic vocabulary or numbers. And it remembers where you left off, so you can, in a way, take lessons; no Duolingo needed.

4. Get answers about things you see in the real world

This feature is exclusive to Advanced Voice, but it's probably my favorite part of voice mode. Thanks to its multimodal superpowers, I can turn on my phone's camera or take a video/photo and ask ChatGPT to help me. For example, I had trouble recognizing a painting I found at a thrift store, and the owner had no idea where it came from. I pulled up voice chat, turned on my camera and asked voice mode where the painting was from. In seconds, it could tell me the title of the painting, the artist's name and when it was painted.

5. It's a better option for people with certain disabilities

For anyone with low vision or dyslexia, talking for sure beats typing. Voice mode can transcribe your speech and then read your answer aloud at whatever pace you choose (you can adjust this in your settings or ask ChatGPT to slow down).
The hands-free option also helps anyone with motor-skill challenges, because all you need is one tap to start and another to stop, without extensive typing on a keyboard.

6. Faster brainstorming

Sometimes I get a burst of ideas, and I think faster than I can type, so ChatGPT's voice mode is perfect for spitballing story ideas, figuring out a new layout for my living room or deciding interesting meals to cook for the week. Because I'm thinking aloud instead of staring at my phone, my ideas flow much more easily and quickly, especially with ChatGPT's instant follow-ups. It helps keep the momentum rolling until I've got a polished idea for whatever I'm brainstorming.

7. Instant summaries you can listen to

Drop a 90-page PDF in the chat, like a movie script or textbook, ask for a summary and have the AI read it aloud to you while you fold laundry. It's like turning any document (I even do Wikipedia pages) into a podcast -- on demand.

Voice mode isn't just a neat trick; it's a quicker and more natural way to use ChatGPT. Whether you're translating street signs, brainstorming an idea or catching up on the news aloud, talking to ChatGPT feels less like using a chatbot and more like having a conversation with a bite-sized expert. Once you get used to thinking out loud, you might never go back to your keyboard.
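The Standard Voice flow described above (your speech is transcribed to text, the text model replies, and the reply is spoken back) can be sketched as a simple three-stage loop. This is an illustrative sketch only, not OpenAI's actual implementation: the stage functions here are hypothetical stand-ins you would replace with real speech-to-text, chat and text-to-speech calls.

```python
from typing import Callable

def voice_turn(
    audio: bytes,
    transcribe: Callable[[bytes], str],   # speech-to-text stage
    chat: Callable[[str], str],           # text-model stage (e.g. GPT-4o)
    synthesize: Callable[[str], bytes],   # text-to-speech stage
) -> tuple[str, bytes]:
    """One Standard-Voice-style turn. The three stages run strictly in
    sequence, which is why the article says this mode takes a little
    longer to talk back than the natively multimodal Advanced Voice."""
    user_text = transcribe(audio)          # what the user said, as text
    reply_text = chat(user_text)           # the model's written reply
    reply_audio = synthesize(reply_text)   # the reply rendered as audio
    return reply_text, reply_audio

if __name__ == "__main__":
    # Stub stages so the sketch runs with no microphone or network.
    text, speech = voice_turn(
        b"<mic capture>",
        transcribe=lambda a: "What's the capital of Poland?",
        chat=lambda t: "Warsaw is the capital of Poland.",
        synthesize=lambda t: t.encode("utf-8"),  # pretend TTS
    )
    print(text)
```

Advanced Voice, by contrast, is described as a single model that consumes and produces audio directly, so there is no separate transcription step to wait on between hearing you and answering.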

