Latest news with #AIassistants


WIRED
06-07-2025
How to Use Voice Typing on Your Phone
When it's easier to talk than type, Android and iOS have you covered. All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. With the rise of AI assistants like Siri, Alexa, and Gemini, we're all now well used to talking to our gadgets. But what you might not realize is that you can actually talk to type anywhere that a text-input box pops up. This can come in handy in a variety of situations—perhaps you've got your hands full of groceries, or you're holding onto a subway rail. Maybe your phone is out of reach, or the screen's cracked and keyboard doesn't work as well as it should. Or maybe being hunched over a tiny screen to compose a message is just not your idea of fun. That is where voice typing can help. It's not an either-or situation either: you can switch between typing and talking as you need. Here's how to do it on Android and iOS, any time the keyboard pops up—whether it's your email app or a web form. Android On Pixel phones and many other Android handsets, the default keyboard is Gboard. When it pops up on screen, tap the mic icon (far right) to enable voice typing, and you can start talking. For more screen real estate, tap the downward arrow to the left of the mic icon. What you say next is going to depend on which app you're in and what you're doing. You can use 'delete' to erase the last word and 'clear all' to wipe the input box fully. The 'next' and 'previous' commands will move you between text fields, while emojis can be spoken out as well (like 'thumbs up emoji'). When your text looks good, you'll be prompted what to say next: Keep an eye on the suggestions under the input box. If you're in a messaging app then you'll typically be prompted to say 'send,' whereas if you're looking through a batch of photos for something you might have to say 'search.' Voice typing in action on Android. 
Courtesy of David Nield

Tap the small 'i' icon on the left of the toolbar if you need more prompts about the voice commands you can use. If you want to keep using voice typing in other input boxes and across other apps until you turn it off, double-tap the mic icon.

If voice typing doesn't work, check that it's enabled (it should be, by default): from Settings on Android, pick System > Keyboard > On-screen keyboard > Gboard > Voice typing. The same screen lets you enable offline access for the feature, and enable or disable automatic punctuation as you talk.

The voice typing process is similar on other types of Android handset, though it might not be identical. On Galaxy phones, the Samsung Keyboard is the default, and with this keyboard the mic icon you need to tap is down in the lower left corner. To make sure voice typing is an option, from Settings choose General management > Samsung keyboard > Voice input.

iOS

Over on the iPhone, you've also got access to voice typing wherever you need it. The default keyboard is the one supplied by Apple, though you can also use alternatives such as Gboard if you wish. The keyboard shows up whenever text needs to be entered, and you can tap the mic button (bottom right) to start talking instead of typing.

You can use a variety of commands while you're talking. Just name an emoji (like 'heart emoji') or say the name of a punctuation symbol (such as 'exclamation mark') to insert the character at the current cursor position. You can split text into blocks using the 'new line' and 'new paragraph' commands, which is handy if you're composing a long message. You can also say 'undo' or 'redo' to go backward or forward through the words you've dictated.

The cursor shows a blue mic on iOS when you're dictating. Courtesy of David Nield

Your iPhone also lets you use commands like 'select sentence' and 'delete paragraph' to give you more control over the blocks of text you're working with.
Sometimes the precision isn't as good as it could be, but you should be able to compose most of a message this way. However, the iPhone doesn't give you an easy way to submit the text you've entered, like Android does with the 'send' or 'search' commands, though in some cases searches will be triggered automatically once you stop talking. Generally, you need to stop dictation (by tapping the mic button or saying 'stop dictation'), and then tap the button for sending or submitting your text.

Dictation should be enabled by default on iOS, but if it's not working, open General > Keyboard in iOS Settings and make sure the Enable Dictation toggle is turned on. The other options here, such as automatic punctuation, apply whether you're dictating or typing your text.


Forbes
12-05-2025
5 AI Agent Myths You Need To Stop Believing Now
AI agents represent the next frontier beyond chatbots, capable of taking autonomous actions that could transform how we work and live.

The latest buzz of excitement in the world of business and consumer technology is all around AI agents. These can be thought of as the next leap forward in the field of generative AI, which gave us ChatGPT and other large-language-model chatbots. Rather than simply answering questions or generating information, they can take action on our behalf, interfacing with other tools and services to complete complex tasks.

The technology hasn't quite reached the watershed moment where it breaks through into the mainstream, as LLM chatbots did a couple of years back when ChatGPT was released. But make no mistake, it's on its way, and its impact is going to be huge as we increasingly turn to AI assistants to help us out in all aspects of life. There's still a lot of confusion around the subject, though. So let's clear up five myths around the topic of agentic AI.

Agents have one fundamental quality that sets them apart from, and above, chatbots: they don't just talk the talk, they walk the walk. This means they can take action, specifically computer-based actions like interacting with websites, digital services, and software. When you think about how many of life's tasks we handle that way, that's potentially quite a lot of work they can take off our hands.

From a technical point of view, this is possible because rather than being powered by one monolithic language-processing tool, like a chatbot, agents are made up of many independent tools and applications, each specializing in a different task. These tools are arranged in a hierarchy, with a powerful LLM acting as the project manager, delegating tasks to whatever will get the job done. Chatbots are based on the same technology as agents, but just provide responses to our prompts.
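The "LLM as project manager" architecture described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's actual framework: the tool functions and the `plan` routine are stand-ins (a real agent would use a language model to produce the plan and real APIs as tools), but the shape — a planner decomposing a task and delegating each step to a specialized tool — is the core agentic pattern.

```python
# Hypothetical sketch of the agent "orchestrator" pattern: a planner routes
# each step of a task to a specialized tool, rather than answering everything
# with one monolithic model. All names here are illustrative stand-ins.

def search_flights(query):          # stand-in for a web-browsing tool
    return f"flights matching '{query}'"

def book_calendar(event):           # stand-in for a calendar-API tool
    return f"booked '{event}'"

TOOLS = {"search": search_flights, "schedule": book_calendar}

def plan(task):
    """Stand-in for the LLM 'project manager': turns a task into tool calls.
    A real agent would ask a language model to produce this plan."""
    if "flight" in task:
        return [("search", task), ("schedule", "flight confirmation")]
    return [("search", task)]

def run_agent(task):
    # Execute the plan step by step, delegating to whichever tool fits.
    return [TOOLS[name](arg) for name, arg in plan(task)]

print(run_agent("find a flight to Berlin"))
```

The key design point the sketch captures is the hierarchy: the planner never does the work itself, it only decides which specialized tool handles each step, which is what separates an agent from a chatbot that simply replies.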
Agents will result in AI becoming more integrated with our daily lives in useful and meaningful ways. It's true that in these early days, the first agentic consumer-facing tools, like OpenAI Operator, were a little limited. In theory, though, AI agents will eventually be capable of taking care of just about any task we usually use a smartphone for. This could include managing our schedules, shopping for groceries, making travel arrangements, arranging appointments for services like healthcare or car maintenance, booking taxis, managing our bank accounts, and countless other things.

This is likely to happen quite quickly, given the speed at which previous AI technologies have improved. Compare ChatGPT's capabilities when it was released to what it can do now. In just over two years, it's evolved to have memory, web browsing, vision, speech, and now agentic capabilities. So while agentic AI mainly browses the web, shops online, and designs simple websites today, tomorrow it could generate entire creative works like movies, run a business, or build an entire virtual world populated with virtual characters.

You might think that it would be difficult to pull a fast one on super-smart agentic AI, but this isn't necessarily proving to be the case. At least one study has found that agents using computer vision to search the web for deals can be tricked into clicking specific links or pop-up ads by making it appear they have the info the AI is looking for. This opens the door to whole new fields of enterprise, ethical and criminal, involving influencing the actions of, or simply subverting, AI agents. Expect devious new forms of cybercrime and fraud involving tricking and manipulating agents. At the same time, new opportunities will open up for legitimate businesses that are able to market effectively to AI agents.

With all the terminology around AI, it's often easy to get confused.
Agentic AI and artificial general intelligence (AGI) are two topics that are often muddled together, but they actually refer to different, if related, concepts. AGI refers to machine intelligence that's able to 'generalize' its knowledge and capabilities in order to solve any problem, rather than just the type of problems it has been trained to solve (much like humans can). Because it empowers machines to operate more autonomously and solve more complex challenges, and creates feedback loops that let them become more knowledgeable as they work, agentic AI can be thought of as a potential step toward AGI. However, true AGI is still believed to be some way off, although OpenAI CEO Sam Altman thinks we could see it this year.

Agentic AI is often described as autonomous because, in theory, it's capable of working without human input or supervision. In practice, though, this isn't a good idea. Remember, AI agents are tools. They can take action on our behalf, but we're always responsible for the results. Agentic AI is very new, frequently makes mistakes, and still performs worse than humans at many tasks, according to some benchmarks. So human oversight and accountability are critical. This applies to individuals using agentic consumer apps as much as it does to businesses looking to implement commercial AI agents.

We will need to understand what AI companies are doing with our data and how it's being used to train machines to take action or make decisions on our behalf. Human oversight, and the ability to step in and intervene when mistakes are made, or to blow the whistle on unethical practices, are critical elements of any agentic framework.

By understanding that AI agents are more than next-gen chatbots, that their utility is set to grow massively, that human oversight is non-negotiable, and that ethical standards are the responsibility of us all, we can make sure we're ready to benefit from the incoming wave of change they will bring.