
Can AI really replace your keyboard and mouse?
'Hey ChatGPT, left-click on the enter password field in the pop-up window appearing in the lower left quadrant of the screen and fill XUS&(#($J, and press Enter.'
Fun, eh? No, thanks. I'll just move my cheap mouse and type the 12 characters on my needlessly clicky keyboard, instead of speaking the password out loud in my co-working space.
It's pretty cool to see ChatGPT understand your voice command, book a cheap ticket for eight people to watch a Liverpool match at Anfield, and land you at the checkout screen. But hey, will you trust it with the password? Or, won't you just type the password with a physical keyboard?
Imagine going all-in on AI, only to realize that the last-mile step, the one where you REALLY need a keyboard or mouse, can't be completed, and you're now stuck. That's exactly the concern many have been raising after watching the flashy AI agent and automation videos from the likes of Google, OpenAI, and Anthropic.
It's a legitimate question
AI was the overarching theme at Google's I/O event earlier this year. By the end of the keynote, I was convinced that Android smartphones are not going to be the same again. And, by extension, neither is any platform where Gemini is going to land, from Workspace apps such as Gmail to navigation on Google Maps while sitting in a car.
The most impressive demos were Project Mariner and the next research prototype of Project Astra. Think of the latter as a next-gen conversational assistant that lets you talk and get real stuff done, without ever tapping on the screen or pulling up the keyboard. You can shift your queries from a user manual hosted on a brand's website to instructional YouTube videos, without ever repeating the context.
It's almost as if the true concept of memory has arrived for AI. In a web browser, it's going to book you tickets, landing you on the final page where you simply have to confirm that all the details are as requested before proceeding with the payment. That leads one to wonder whether the keyboard and mouse are dead concepts for digital input as voice interactions come to the forefront of AI.
The burden of error
Now, as odd as that sounds, your computer already comes with voice-based control for navigating through the operating system. On Windows PCs and macOS, you can find the voice access tools as part of the accessibility suite. There are a handful of shortcuts available to speed up the process, and you can create your own, as well.
With the advent of next-gen AI models, we're talking about ditching the keyboard and mouse for everyone, and not just pushing it as an assistive technology.
Imagine Claude Computer Use and the eye-tracked input from Apple's Vision Pro headset coming together. In case you're unfamiliar, Anthropic's Computer Use is a, well, computer use agent. Anthropic says it lets the AI 'use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text.'
Now, think of a scenario where your intent is given as voice to Claude, picked up by the onboard mics, and the task is executed. For whatever final step is required of you, gestures fill the gap. The Vision Pro has demonstrated that eye-tracked controls are possible and work with a high degree of accuracy.
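To make that less abstract, here is a minimal sketch of the request an app might assemble for Anthropic's computer-use beta in the Messages API. The tool type string, model name, and field names follow Anthropic's published beta documentation at the time of writing, but treat the exact values as assumptions that may change between versions; this only builds the request body, it doesn't call the API.

```python
# Sketch: defining the computer-use tool an agent request would carry.
# Tool type and model strings are assumptions based on Anthropic's
# public beta docs; verify against current documentation before use.

def build_computer_use_request(instruction: str,
                               width: int = 1280,
                               height: int = 800) -> dict:
    """Assemble the body an app could pass to the Messages API."""
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "tools": [{
            "type": "computer_20241022",  # screen, cursor, and keyboard control
            "name": "computer",
            "display_width_px": width,
            "display_height_px": height,
        }],
        "messages": [{"role": "user", "content": instruction}],
    }

request = build_computer_use_request(
    "Open the browser and find tickets for the Liverpool match at Anfield"
)
```

The model then responds with tool-use steps (screenshots to take, coordinates to click, text to type) that a host program executes on its behalf, which is exactly the loop the voice-plus-gestures scenario above would sit on top of.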
Away from headsets, voice-controlled AI can still work on an average computer. Hume AI, in partnership with Anthropic, is building a system called Empathic Voice Interface 2 (EVI 2) that turns voice commands into computer input. It's almost like talking to Alexa, except instead of ordering broccoli, the assistant understands what you're saying and turns it into keyboard or mouse input.
All that sounds terrific, but let's think of a few realistic scenarios. You will need a keyboard for fine-tuned media edits, for making minor changes on a coding canvas, and for filling cells in a sheet. Imagine saying, 'Hey Gemini, put four thousand eight hundred and ninety-five dollars in cell D5 and label it as air travel expense.' Yeah, I know. I'd just type it, too.
The last mile, not the end
If you go through demos of AI Mode in Search, the Project Mariner agent, and Gemini Live, you will get a glimpse of voice computing. All these AI advancements sound stunningly convenient, until they're not. For example, at what point does it become too irritating to say things like 'Move to the dialog box in the top-left corner and left-click the blue button that says Confirm'?
It's too cumbersome, even if all the steps before it were performed autonomously by an AI.
And let's not forget the elephant in the room: AI has a habit of going haywire. 'At this stage, it is still experimental—at times cumbersome and error-prone,' warns Anthropic about Claude Computer Use. The situation is not too dissimilar with OpenAI's Operator agent, or the similar tool of the same name currently in development at Opera, the folks behind a pretty cool web browser.
Removing the keyboard and mouse from an AI-boosted computer is like driving a Tesla with Full Self-Driving (FSD) enabled, except there's no steering wheel, and the only controls available are the brake and accelerator pedals. The car will definitely take you somewhere, but you need to take over if something unexpected transpires.
In the computing context, think of troubleshooting, where you MUST be in the driver's seat. But let's assume that an AI model, driven primarily by voice (and captured by the mic on your preferred computing machine), lands you at the final step, where you need to close the workflow by, say, making a payment.
Even with passkeys, you will need to at least confirm your identity by entering a password, opening an authenticator app, or touching a fingerprint sensor. No OS maker or app developer (especially one dealing with identity verification) would let an AI model have open control over such a critical task.
It's just too risky to automate with an AI agent, even with conveniences like passkeys coming into the picture. Google often says that Gemini will learn from memory and your own interactions. But it all begins with actually letting it monitor your computer usage, which is fundamentally reliant on keyboard and mouse input. So yeah, we're back to square one.
Go virtual? It's a long wait
When we talk about replacing the computer mouse and keyboard with AI (or any other advancement), we are merely talking about substituting them with a proxy, only to land at a familiar replacement. There is plenty of research material out there on virtual mice and keyboards, dating back at least a decade, long before the seminal 'transformers' paper was released and pushed the AI industry into the next gear.
In 2013, DexType released an app that tapped into the tiny Leap Motion hardware to enable a virtual typing experience in the air. No touch screen required, or any fancy laser projector like the Humane AI Pin. Leap Motion died in 2019, but the idea didn't. Meta is arguably the only company that has a realistic software and hardware stack ready for an alternative form of input-output on computing, something it calls human-computer interaction (HCI).
The company has been working on wrist-worn wearables that enable an entirely different form of gesture-based control. Instead of tracking the spatial movement of fingers and limbs, Meta is using a technique called electromyography (EMG). It turns electrical motor nerve signals generated in the wrist into digital input for controlling devices. And yes, cursor and keyboard input are very much part of the package.
At the same time, Meta also claims these gestures will be faster than a typical key press, because we are talking about electrical signals traveling from the wrist straight to a computer, instead of waiting on finger movement. 'It's a much faster way to act on the instructions that you already send to your device when you tap to select a song on your phone, click a mouse or type on a keyboard today,' says Meta.
Fewer replacements, more repackaging
There are two problems with Meta's approach, with or without AI in the picture. The concept of a cursor is still very much there, and so is the keyboard, albeit in digital form. We are just switching from the physical to the virtual. The replacement Meta is pushing sounds very futuristic, especially with its multimodal Llama AI models in the mix.
Then there's the existential dilemma. These wearables are still very much in the realm of research labs. And when they come out, they won't be cheap, at least for the first few years. Even barebones third-party apps like WowMouse are bound to subscriptions and held back by OS limitations.
I can't imagine ditching my cheap $100 keyboard for an experimental voice- or gesture-based input device, let alone expecting it to replace the full keyboard and mouse input of my daily workflow. Most importantly, it will take a while before developers embrace natural language-driven inputs in their apps. That's going to be a long, drawn-out process.
What about alternatives? Well, we already have apps such as WowMouse, which turns your smartwatch into a gesture-recognition hub for finger and palm movements. However, it only serves as a replacement for cursor and tap gestures, not a full-fledged keyboard experience. But again, letting apps access your keyboard is a risk that OS overlords will protest. Remember keyloggers?
At the end of the day, we are at a point where the conversational capabilities of AI models and their agentic chops are making a huge leap. But they still require you to cross the finish line with a mouse click or a few key presses, instead of fully replacing them. And they're just too cumbersome when you can hit a keyboard shortcut or click a mouse instead of narrating a long chain of voice commands.
In a nutshell, AI will reduce our reliance on physical input, but won't replace it. At least, not for the masses.