How AI Adoption Is Sitting With Workers

Time Magazine · 3 hours ago
There's a danger to focusing primarily on CEO statements about AI adoption in the workplace, warns Brian Merchant, a journalist-in-residence at the AI Now Institute, an AI policy and research institute.
'There's a wide gulf between the prognostications of tech company CEOs and what's actually happening on the ground,' he says. In 2023, Merchant published Blood in the Machine, a book about how the historical Luddites resisted automation during the industrial revolution. In his Substack newsletter of the same name, Merchant has written about how AI implementation is now reshaping work.
To better understand workers' perspectives on how AI is changing jobs, we spoke with Merchant. Here are excerpts from our conversation, edited for length and clarity:
There have been a lot of headlines recently about how AI adoption has led to headcount reductions. How do you define the AI jobs crisis?
There is a real crisis in work right now, and AI poses a distinct kind of threat. But that threat to me, based on my understanding of technological trends in history, is less that we're looking at a widespread, mass-automation, job-wipe-out event and more at a particular set of logics that generative AI gives management and employers.
There are jobs that are uniquely vulnerable. They might not be immense in number, but they're jobs that people think are pretty important—writing and artistic creation and that kind of thing. So you do have those jobs being threatened, but then we also have this crisis where AI supplies managers and bosses with this imperative where, whether or not the AI can replace somebody, it's still being pushed as a justification for doing so. We saw this a lot with DOGE and the hollowing out of the public workforce and the AI-first strategies that were touted over there.
More often than facilitating outright job replacement, automation is used by bosses to break down tasks, deskill labor, or gain leverage against workers. This was true in the Luddites' time, and it's true right now. A lot of the companies that say they're 'AI-first' are merely taking the opportunity to reduce salaried headcount and replace it with cheaper, more precarious contract labor. This is what happened with Klarna, the fintech company that has famously been one of the most vocal advocates of AI anywhere.
[Editor's note: In May, Klarna CEO Sebastian Siemiatkowski told Bloomberg that the company was reversing its well-publicized move to replace 700 human call-center workers with AI and instead hiring humans again. 'As cost unfortunately seems to have been a too predominant evaluation factor when organizing this, what you end up having is lower quality,' Siemiatkowski said.]
After all, firms still need people to ensure the AI output is up to par, edit it, or to 'duct tape it' to make sure it works well enough with existing systems—bosses just figure they can take the opportunity to call that 'unskilled' work and pay the people who are doing it less.
Your project, 'AI Killed My Job,' is an ongoing, multi-part series that dives deeper into how the AI jobs crisis is impacting workers day-to-day. What themes or patterns are emerging from those stories?
I invited workers who have been impacted by AI to reach out and share their stories. The project has just begun, and I've already gotten hundreds of responses at this point. I expected to see AI being used as a tool by management to try to extract more labor and more value from people, to get people to work harder, and to deteriorate conditions rather than replace work outright. That's been borne out, and that's what I've seen.
The first installment that I ran was around tech workers. Some people assume that the tech industry is fairly homogeneous in its enthusiasm for AI, but that's really not the case. A lot of the workers who have to deal with these tools are not happy with the way AI is being used in their companies and the impact it's having on their work.
There are a few people [included in the first installment] who have lost their jobs as part of layoffs initiated by a company with an AI-first strategy, including at CrowdStrike and Dropbox, and I'm hearing from many people who haven't lost their jobs yet but are increasingly concerned that they will. But, by and large, what you're seeing now is managers using AI to justify speeding up work, trying to get employees to use it to be more productive at the expense of quality or the things that people used to enjoy about their jobs.
There are people who are frustrated to see management really encouraging the use of more AI at the expense of security or product quality. There's a story from a Google worker who watched colleagues feed AI-generated code into key infrastructures, which was pretty unsettling to many. That such an important and powerful company that runs such crucial web infrastructure would allow AI-generated code to be used in their systems with relatively few safeguards was really surprising. [Editor's note: A Google spokesperson said that the company actively encourages AI use internally, with roughly 30% of the company's code now being AI generated. They cited CEO Sundar Pichai's estimate that AI has increased engineering velocity by 10% but said that engineers have rigorous code review, security, and maintenance standards.] We're also seeing it being used to displace accountability, with managers using AI as a way to deflect blame should something go wrong, or, 'It's not my fault; it's AI's fault.'
Your book, Blood in the Machine, tells the story of the historical Luddites' uprising against rising automation during the industrial revolution. What can we learn from that era that's still relevant today?
One lesson we can learn from the Luddites is that we should be seeking ways to involve more people and stakeholders in the process of developing and deploying technology. The Luddites were not anti-technology. They rose up and they smashed the machines because they had no other choice. The deck was stacked against them, and a lot of them were quite literally starving. Collective bargaining was illegal for them. And, just like today, conditions were increasingly difficult as the democratic levers that people can pull to demand a seat at the table were vanishingly few. (I mean, Silicon Valley just teamed up with the GOP to try to get an outright 10-year ban passed on states' abilities to regulate AI.) That leads to strife, it leads to anger, it leads to feeling like you don't have a say or any options.
Now, we're looking at artists and writers and content creators and coders and you name it, watching their livelihoods become more precarious and their conditions worsen, if not get erased outright. As you squeeze more and more populations of people, it's not unthinkable that you would see what happened then happen again in some capacity. You're already seeing the roots of that with people vandalizing Waymo cars, which they see as the agents of big tech and automation. That's a reason employers might want to consider the human element rather than putting the pedal to the metal on AI automation, because there's a lot of fear, anxiety, and anger at the way all of this has taken shape and is playing out.
What should employers do instead?
When it comes to employers, at the end of the day, if you're shelling out for a bunch of AI, then you're either hoping that your employees will use it to be more productive for you and work harder for you, or you're hoping to get rid of employees. Ideally, the employer would say it's the former. It would trust its employees to know how best to generate more value and make them more productive. In reality, even if a company goes that far, it can still turn around and trim labor costs elsewhere, mandating that workers use AI to pick up laid-off colleagues' workloads and ratchet up productivity. So what you really need is a union contract, or something codified in law, saying that you can't just fire people and replace them with AI.
You see some union contracts that include language about the ways AI or automation can and can't be implemented, and what the worker has a say over. Right now, that is the best means of giving people power over a technology that's going to affect their working life. The problem is that union density in the United States is so low that it limits who can enjoy such a benefit to those who are formally organized. There are also attempts at legislation that put checks on what automation can and can't touch, when AI can be used in the hiring process, or what kinds of data it can collect. Overall, there has to be a serious check on the power of Silicon Valley before we can hope to get workers' voices heard in terms of how the technology's affecting them.
