At Amazon, some programmers say their jobs have begun to resemble warehouse work
At Amazon, which is making big investments in generative AI, the culture of coding is changing rapidly. PHOTO: DAVIDE BONAZZI/NYTIMES
NEW YORK - Since at least the Industrial Revolution, workers have worried that machines would replace them.
But when technology transformed automaking and even secretarial work, the response typically wasn't to slash jobs and reduce the number of workers. It was to 'degrade' the jobs, breaking them into simpler tasks to be performed over and over at a rapid clip. Small shops of skilled mechanics gave way to hundreds of workers spread across an assembly line. The personal secretary gave way to pools of typists and data-entry clerks.
The workers 'complained of speedup, work intensification and work degradation,' as labour historian Jason Resnikoff described it.
Something similar appears to be happening with artificial intelligence in one of the fields where it has been most widely adopted: coding.
As AI spreads through the labour force, many white-collar workers have expressed concern that it will lead to mass unemployment. Joblessness has ticked up and widespread layoffs might eventually come, but the more immediate downside for software engineers appears to be a change in the quality of their work. Some say it is becoming more routine, less thoughtful and, crucially, much faster paced.
Companies seem to be convinced that, like assembly lines of old, AI can increase productivity. A recent paper by researchers at Microsoft and three universities found that programmers' use of an AI coding assistant called Copilot, which proposes snippets of code that they can accept or reject, increased a key measure of output by more than 25 per cent.
At Amazon, which is making big investments in generative AI, the culture of coding is changing rapidly. In his recent letter to shareholders, chief executive Andy Jassy wrote that generative AI was yielding big returns for companies that use it for 'productivity and cost avoidance.' He said working faster was essential because competitors would gain ground if Amazon didn't give customers what they want 'as quickly as possible' and cited coding as an activity where AI would 'change the norms.'
Those changing norms have not always been eagerly embraced. Three Amazon engineers said managers had increasingly pushed them to use AI in their work over the past year. The engineers said the company had raised output goals and had become less forgiving about deadlines. It has even encouraged coders to gin up new AI productivity tools at an upcoming hackathon, an internal coding competition. One Amazon engineer said his team was roughly half the size it was in 2024, but it was expected to produce roughly the same amount of code by using AI.
Amazon said it conducts regular reviews to make sure teams are adequately staffed and may increase their size if necessary.
Other tech companies are moving in the same direction. In a memo to employees in April, the chief executive of Shopify, a company that helps entrepreneurs build and manage e-commerce websites, announced that 'AI usage is now a baseline expectation' and that the company would 'add AI usage questions' to performance reviews.
Google recently told employees that it would soon hold a companywide hackathon in which one category would be creating AI tools that could 'enhance their overall daily productivity,' according to an internal announcement. Winning teams will receive US$10,000 (S$12,800). A Google spokesperson noted that more than 30 per cent of the company's code is now suggested by AI and accepted by developers.
The shift has not been all negative for workers. At Amazon and other companies, managers argue that AI can relieve employees of tedious tasks and enable them to perform more interesting work. Mr Jassy wrote last year that the company had saved 'the equivalent of 4,500 developer-years' by using AI to do the thankless work of upgrading old software.
Eliminating such tedious work may benefit a subset of accomplished programmers, said Lawrence Katz, a labour economist at Harvard University. But for inexperienced programmers, the result of introducing AI can resemble the shift from artisanal work to factory work in the 19th and 20th centuries.
The automation of coding has special resonance for Amazon engineers, who have watched their blue-collar counterparts undergo a similar transition.
For years, many workers at Amazon warehouses walked miles each day to track down inventory. But over the past decade, Amazon has increasingly relied on so-called robotics warehouses, where pickers stand in one spot and pull inventory off shelves delivered to them by lawn-mower-like robots, no walking necessary.
The robots generally haven't displaced humans; Amazon said it has hired hundreds of thousands of warehouse workers since their introduction, while creating many new skilled roles. But the robots have increased the number of items each worker can pick from dozens an hour to hundreds. Some workers complain that the robots have also made the job hyper-repetitive and physically taxing. Amazon says it provides regular breaks and cites positive feedback from workers about its cutting-edge robots.
The Amazon engineers said this transition was on their minds as the company urged them to rely more on AI. They said that while doing so was technically optional, they had little choice if they wanted to keep up with their output goals, which affect their performance reviews.
One Amazon engineer said that building a feature for the website used to take a few weeks; now it must frequently be done within a few days. He said this was possible only by using AI to help automate the coding and by cutting down on meetings to solicit feedback and explore alternative ideas.
The new approach to coding at many companies has, in effect, eliminated much of the time the developer spends reflecting on his or her work.
'It used to be that you had a lot of slack because you were doing a complicated project – it would maybe take a month, maybe take two months, and no one could monitor it,' Mr Katz said. 'Now, you have the whole thing monitored, and it can be done quickly.'
Amid their frustration, many Amazon engineers have joined a group called Amazon Employees for Climate Justice, which is pressuring the company to reduce its carbon footprint and has become a clearinghouse for workers' anxieties about other issues, like return-to-office mandates.
The group's organisers say they are in touch with several hundred Amazon employees on a regular basis and that the workers increasingly discuss the stress of using AI on the job, in addition to the effect that the technology has on the climate.
The complaints have centred on 'what their careers are going to look like,' said Eliza Pan, a former Amazon employee who is a representative for the group. 'And not just their careers but the quality of the work.' NYTIMES