
AI Isn't Fully Automated — It Runs on Hidden Human Labor
Welcome to Tech Times' AI EXPLAINED, where we look at the tech of today and tomorrow.
Imagine this scenario, one that's increasingly common: a voice AI listens in on your meeting at work, hands you a summary and analysis of that meeting, and you assume AI did all the work.
In reality, though, none of these tools works alone. PLAUD AI, Rabbit, ChatGPT, and more all rely on a layer of human labor that most of us never hear about. Behind that clean chat interface on your phone or computer, there are data labelers who tag speech samples, contractors who rate AI answers, and testers feeding the system more examples to learn from. Some are highly trained specialists, while others handle the more tedious aspects of the work. Either way, your AI isn't just automated: it's a complex blend of code and human effort. Without that effort, your AI wouldn't work at all.
The Invisible Workforce Behind Everyday AI
AI tools don't just appear out of thin air, of course. They learn similarly to the way we do: by example. That learning process often relies on what's called human-in-the-loop (HITL) training.
As data-annotation company Encord says in a blog post:
"In machine learning and computer vision training, Human-in-the-Loop (HITL) is a concept whereby humans play an interactive and iterative role in a model's development. To create and deploy most machine learning models, humans are needed to curate and annotate the data before it is fed back to the AI. The interaction is key for the model to learn and function successfully," the company wrote.
Annotators, data scientists, and data operations teams play a significant role in collecting, supplying, and annotating the necessary data, the post continued. How much human input is needed varies with how complex the data is and how much human interaction the model is expected to handle.
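What does that look like in practice? A common pattern is a labeling loop in which the model takes a first pass and humans review its uncertain cases. The sketch below is illustrative only; model.predict and ask_human stand in for whatever tooling an annotation team actually uses.

```python
# Illustrative human-in-the-loop labeling pass: the model pre-labels data,
# and humans review the low-confidence cases before the results re-enter
# training. `model.predict` and `ask_human` are hypothetical placeholders.

def hitl_labeling_pass(samples, model, ask_human, confidence_threshold=0.8):
    labeled = []
    for sample in samples:
        label, confidence = model.predict(sample)
        if confidence < confidence_threshold:
            # Uncertain prediction: route it to a human annotator for review.
            label = ask_human(sample, suggested_label=label)
        labeled.append((sample, label))
    return labeled  # fed back into the next training round
```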
Of course, as with many business activities, there are ethical concerns. Many content moderators complain of low pay and traumatic content to review. There can also be language bias in AI training, something researchers and companies are likely working to solve as AI becomes more complex and global.
Case Study: PLAUD AI
Various ways users wear the PLAUD Note device—on a wristband, clipped to a lapel, or hanging as a pendant—highlighting its flexibility for hands-free voice capture throughout the day. PLAUD AI
PLAUD AI's voice assistant offers an easy, one-button experience. Just press a button, speak, and then let it handle the rest. As the company said on its website, the voice assistant lets you "turn voices and conversations into actionable insights."
Behind the scenes, this "magic" started with pre-trained automatic speech recognition (ASR) models like Whisper, or custom variants, that have been refined with actual user recordings. The models not only have to transcribe words, but also try to understand the structure, detect speakers, and interpret tone of voice. The training involves hours and hours of labeled audio and feedback from real conversations. It's likely that every time you see an improvement in the output, it's thanks to thousands of micro-adjustments based on user corrections or behind-the-scenes testing.
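PLAUD hasn't published its pipeline, but the pre-trained ASR step it builds on is easy to picture. Here's a minimal sketch using the open-source openai-whisper package; the model size and file name are illustrative choices, not details the company has confirmed.

```python
# A minimal sketch of the pre-trained ASR step, using the open-source
# "openai-whisper" package. "meeting.mp3" and the "base" model size are
# illustrative choices, not details confirmed by PLAUD.
import whisper

model = whisper.load_model("base")           # small pre-trained checkpoint
result = model.transcribe("meeting.mp3")     # text plus timestamped segments

print(result["text"])                        # raw transcript, before any summary
for segment in result["segments"]:
    print(segment["start"], segment["end"], segment["text"])
```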
According to reviewers, PLAUD AI leverages OpenAI's Whisper speech-to-text model running on its own servers. There are likely many people managing the PLAUD AI version of the model for its products, too. Every neat paragraph that comes out of the voice assistant likely reflects countless iterations of fine-tuning and A/B testing by prompt engineers and quality reviewers. That's how you get your results without having to deal with all that back-end work yourself.
Case Study 2: ChatGPT and Otter.ai
The ChatGPT logo represents one of the most widely used AI assistants—powered not just by models, but by human trainers, raters, and user feedback. ilgmyzin/Unsplash
When you use ChatGPT, it can feel like an all-knowing assistant with a polished tone and helpful answers. Those are built, of course, on a foundation of human work. OpenAI used reinforcement learning from human feedback, or RLHF, to train its models. That means actual humans rated responses so the system could learn which answers were the most helpful and accurate, not to mention the most polite.
"On prompts submitted by our customers to the API, our labelers provide demonstrations of the desired model behavior and rank several outputs from our models," wrote the company in a blog post . "We then use(d) this data to fine-tune GPT‑3."
Otter.ai, a popular online voice transcription service, also relies on human work to improve its output. It doesn't use RLHF like OpenAI does, but it does include feedback tools for users to note inaccurate transcriptions, which the company then uses to fine-tune its own models.
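Otter hasn't detailed how it stores that feedback, but conceptually each user fix can become a new training pair. The sketch below is purely illustrative; the function and field names are invented.

```python
# Purely illustrative: when a user fixes a transcript, the audio segment and
# the corrected text can be stored as a new fine-tuning example. The function
# and field names here are invented, not Otter's actual schema.

def record_correction(audio_path, machine_transcript, user_transcript, dataset):
    if user_transcript.strip() != machine_transcript.strip():
        dataset.append({
            "audio": audio_path,             # the original recording segment
            "text": user_transcript,         # the human-corrected transcript
            "previous": machine_transcript,  # what the model originally produced
        })
```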
The company also uses synthetic data (generated pairs of audio and text) to help train its models, but without user corrections, those synthetic transcripts can struggle with accents, cross talk, or industry jargon, the kinds of things only humans can fix.
Case Study 3: Rabbit R1's Big Promise Still Needs Human Help
The Rabbit R1 made a splash with its debut: a palm-sized orange gadget promising to run your apps for you, no screen-tapping required. Just talk to it, and it's supposed to handle things like ordering takeout or cueing up a playlist. At least, that's the idea.
Rabbit says it built the device around something called a Large Action Model (LAM), which is supposed to "learn" how apps work by watching people use them. What that means in practice is that humans record themselves doing things like opening apps, clicking through menus, or completing tasks, and those recordings become training data. The R1 didn't figure all this out on its own; it was shown how to do it, over and over.
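Rabbit hasn't published its training format, but a recorded demonstration of this kind boils down to a sequence of UI actions. The structure below is invented purely to illustrate the idea.

```python
# Invented example of a "demonstration trace" a Large Action Model might be
# trained on: a human completing a task, recorded step by step. The task and
# field names are hypothetical, not Rabbit's actual format.
demonstration = {
    "task": "order a pizza in a food-delivery app",
    "steps": [
        {"action": "open_app", "target": "FoodDeliveryApp"},
        {"action": "tap",      "target": "search_box"},
        {"action": "type",     "text":   "pizza"},
        {"action": "tap",      "target": "first_result"},
        {"action": "tap",      "target": "add_to_cart"},
        {"action": "tap",      "target": "checkout"},
    ],
}
```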
Since launch, people testing the R1 have noticed that it doesn't always feel as fluid or "intelligent" as expected. Some features seem more like pre-programmed flows than adaptive tools. In short, it's not magic—it's a system that still leans on human-made examples, feedback, and fixes to keep improving.
That's the pattern with almost every AI assistant right now: what feels effortless in the moment is usually the result of hours of grunt work—labeling, testing, and tuning—done by people you'll never see.
AI Still Relies On Human Labor
For all the talk of artificial intelligence replacing human jobs, the truth is that AI still leans hard on human labor to work at all. From data labelers and prompt raters to everyday users correcting transcripts, real people are constantly training, guiding, and cleaning up after the machines. The smartest AI you use today is only as good as the humans behind it. For now, that's the part no algorithm can automate away.
Originally published on Tech Times
