Latest news with #AI


Time Magazine
2 days ago
- Time Magazine
Is 'Sweatshop Data' Really Over?
Welcome back to In the Loop, TIME's new twice-weekly newsletter about the world of AI. If you're reading this in your browser, you can subscribe to have the next one delivered straight to your inbox.

What to Know: The future of 'sweatshop data'

You can measure time in the world of AI by the cadence of new essays with provocative titles. Another one arrived earlier this month from the team at Mechanize Work: a new startup that is trying to, er, automate all human labor. Its title? 'Sweatshop data is over.' This one caught my attention. As regular readers may know, I've done a lot of reporting over the years on the origins of the data that is used to train AI systems. My story 'Inside Facebook's African Sweatshop' was the first to reveal how Meta used contractors in Kenya, some earning as little as $1.50 per hour, to remove content from its platforms—content that would later be used in attempts to train AI systems to do that job automatically. I also broke the news that OpenAI used workers from the same outsourcing company to detoxify ChatGPT. In both cases, workers said the labor left them with diagnoses of post-traumatic stress disorder. So if sweatshop data really is a thing of the past, that would be a very big deal indeed.

What the essay argues — Mechanize Work's essay points to a very real trend in AI research. To summarize: AI systems used to be relatively unintelligent. To teach them the difference between, say, a cat and a dog, you'd need to give them lots of different labeled examples of cats and dogs. The most cost-effective way to get those labels was from the Global South, where labor is cheap. But as AI systems have gotten smarter, they no longer need to be told basic information, the authors argue. AI companies are now desperately seeking expert data, which necessarily comes from people with PhDs—and who won't put up with poverty wages. 'Teaching AIs these new capabilities will require the dedicated efforts of high-skill specialists working full-time, not low-skill contractors working at scale,' the authors argue.

A new AI paradigm — The authors are, in one important sense, correct. The big money has indeed moved toward expert data. A clutch of companies, including Mechanize Work, are jostling to dominate the space, which could eventually be worth hundreds of billions of dollars, according to insiders. Many of them aren't just hiring experts, but are also building dedicated software environments to help AI learn from experience at scale, in a paradigm known as reinforcement learning with verifiable rewards. It takes inspiration from DeepMind's 2017 model AlphaZero, which didn't need to observe humans playing chess or Go, and instead became superhuman just by playing against itself millions of times. In the same vein, these companies are trying to build software that would allow AI to 'self-play,' with the help of experts, on questions of coding, science, and math. (A minimal sketch of what a 'verifiable reward' looks like appears at the end of this item.) If they can get that to work, it could potentially unlock major new leaps in capability, top researchers believe.

There's just one problem — While all of this is true, it does not mean that sweatshop data has gone away. 'We don't observe the workforce of data workers, in the classical sense, decreasing,' says Milagros Miceli, a researcher at the Weizenbaum Institute in Berlin who studies so-called sweatshop data. 'Quite the opposite.'
Meta and TikTok, for example, still rely on thousands of contractors all over the world to remove harmful content from their systems—a task that has stubbornly resisted full AI automation. Other types of low-paid tasks, typically carried out in places like Kenya, the Philippines, and India, are booming. 'Right now what we are seeing is a lot of what we call algorithmic verification: people checking in on existing AI models to ensure that they are functioning according to plan,' Miceli says. 'The funny thing is, it's the same workers. If you talk to people, they will tell you: I have done content moderation. I have done data labeling. Now I am doing this.'

Who to Know: Shengjia Zhao, Chief Scientist, Meta Superintelligence Labs

Mark Zuckerberg promoted AI researcher Shengjia Zhao to chief scientist of the new effort inside Meta to create 'superintelligence.' Zhao joined Meta last month from OpenAI, where he worked on the o1-mini and o3-mini models.

Zuck's memo — In a note to staff on Saturday, Zuckerberg wrote: 'Shengjia has already pioneered several breakthroughs including a new scaling paradigm and distinguished himself as a leader in the field.' Zhao, who studied for his undergraduate degree in Beijing and graduated from Stanford with a PhD in 2022, 'will set the research agenda and scientific direction for our new lab,' Zuckerberg wrote.

Meta's recruiting push — Zuckerberg has ignited a fierce war for talent in the AI industry by offering top AI researchers pay packages worth up to $300 million, according to reports. 'I've lost track of how many people from here they've tried to get,' Sam Altman told OpenAI staff in a Slack message, according to the Wall Street Journal.

Bad news for LeCun — Zhao's promotion is yet another sign that Yann LeCun—who until the hiring blitz this year was Meta's most senior AI scientist—has been put out to pasture. A notable critic of the idea that LLMs will scale to superintelligence, LeCun appears increasingly at odds with Zuckerberg's bullishness. Meta's Superintelligence team is clearly now a higher priority for Zuckerberg than the separate group LeCun runs, called Facebook AI Research (FAIR). In a note appended to his announcement of Zhao's promotion on Threads, Zuckerberg denied that LeCun had been sidelined. 'To avoid any confusion, there's no change in Yann's role,' he wrote. 'He will continue to be Chief Scientist for FAIR.'

AI in Action

One of the big ways AI is already affecting our world is in the changes it's bringing to our information ecosystem. News publishers have long complained that Google's 'AI Overviews' in its search results have reduced traffic, and therefore revenues, harming their ability to employ journalists and hold the powerful to account. Now we have new data from the Pew Research Center that puts that complaint into stark relief. When AI summaries are included in search results, only 8% of users click through to a link — down from 15% without an AI summary, the study found. Just 1% of users clicked on any link in that AI summary itself, rubbishing the argument that AI summaries are an effective way of sending users toward publishers' content. As always, if you have an interesting story of AI in Action, we'd love to hear it.
Email us at: intheloop@

What We're Reading

'How to Save OpenAI's Nonprofit Soul, According to a Former OpenAI Employee,' by Jacob Hilton in TIME

Jacob Hilton, who worked at OpenAI between 2018 and 2023, writes about the ongoing battle over OpenAI's legal structure—and what it might mean for the future of our world. 'The nonprofit still has no independent staff of its own, and its board members are too busy running their own companies or academic labs to provide meaningful oversight,' he argues. 'To add to this, OpenAI's proposed restructuring now threatens to weaken the board's authority when it instead needs reinforcing.'
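As promised above, here is a rough illustration of the 'reinforcement learning with verifiable rewards' idea the essay describes: for domains like math or code, a reward can be computed mechanically, by checking an answer against a known result or a test suite, rather than by paying a human rater. The function names, task format and training loop below are invented for illustration; this is a minimal sketch, not any lab's actual pipeline.

```python
# Minimal, illustrative sketch of "reinforcement learning with verifiable rewards" (RLVR).
# The model, task format and update call are toy stand-ins, not a real training system.

import random

def verifiable_reward(task: dict, answer: str) -> float:
    """Reward is computed mechanically, not by a human rater."""
    if task["kind"] == "math":
        return 1.0 if answer.strip() == task["expected"] else 0.0
    if task["kind"] == "code":
        # For code, the check would be running unit tests in a sandbox.
        return 1.0 if task["passes_tests"](answer) else 0.0
    return 0.0

def train_step(model, tasks, num_samples=4):
    """Sample several attempts per task, score them, and nudge the model toward high-reward ones."""
    for task in tasks:
        attempts = [model.generate(task["prompt"]) for _ in range(num_samples)]
        rewards = [verifiable_reward(task, a) for a in attempts]
        model.reinforce(task["prompt"], attempts, rewards)  # e.g., a policy-gradient style update

# Toy usage: a single arithmetic task with an automatically checkable answer.
class ToyModel:
    def generate(self, prompt):
        return random.choice(["4", "5"])
    def reinforce(self, prompt, attempts, rewards):
        pass  # a real system would update model weights here

tasks = [{"kind": "math", "prompt": "What is 2 + 2?", "expected": "4"}]
train_step(ToyModel(), tasks)
```

The point of the 'self-play' framing is that, once the reward check exists, the model can generate and grade millions of its own attempts, with the human experts spending their time designing the tasks and the checks rather than labeling each answer.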

Straits Times
11-07-2025
- Business
- Straits Times
AI slows down some experienced software developers, study finds
SAN FRANCISCO - Contrary to popular belief, using cutting-edge artificial intelligence tools slowed down experienced software developers when they were working in codebases familiar to them, rather than supercharging their work, a new study found. AI research nonprofit METR conducted the in-depth study on a group of seasoned developers earlier this year while they used Cursor, a popular AI coding assistant, to help them complete tasks in open-source projects they were familiar with. Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24 per cent. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20 per cent. But the study found that using AI did the opposite: it increased task completion time by 19 per cent. The study's lead authors, Mr Joel Becker and Mr Nate Rush, said they were shocked by the results: prior to the study, Mr Rush had written down that he expected 'a 2x speed up, somewhat obviously'. The findings challenge the belief that AI always makes expensive human engineers much more productive, a factor that has attracted substantial investment into companies selling AI products to aid software development. AI is also expected to replace entry-level coding positions. Mr Dario Amodei, CEO of Anthropic, recently told Axios that AI could wipe out half of all entry-level white collar jobs in the next one to five years. Prior literature on productivity improvements has found significant gains: one study found using AI sped up coders by 56 per cent, another study found developers were able to complete 26 per cent more tasks in a given time. But the new METR study shows that those gains don't apply to all software development scenarios. In particular, this study showed that experienced developers intimately familiar with the quirks and requirements of large, established open source codebases experienced a slowdown. Other studies often rely on software development benchmarks for AI, which sometimes misrepresent real-world tasks, the study's authors said. The slowdown stemmed from developers needing to spend time going over and correcting what the AI models suggested. 'When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what's needed,' Mr Becker said. The authors cautioned that they do not expect the slowdown to apply in other scenarios, such as for junior engineers or engineers working in codebases they aren't familiar with. Still, the majority of the study's participants, as well as the study's authors, continue to use Cursor today. The authors believe it is because AI makes the development experience easier, and in turn, more pleasant, akin to editing an essay instead of staring at a blank page. 'Developers have goals other than completing the task as soon as possible,' Mr Becker said. 'So they're going with this less effortful route.' REUTERS
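One subtlety worth making explicit: a percentage change in task completion time is not the same number as a percentage change in speed. The quick conversion below uses only the figures quoted in the article; the calculation itself is an illustration, not part of the METR study.

```python
# Back-of-the-envelope conversion between "change in completion time" and "change in speed",
# using the figures quoted in the article. Illustrative only; not from the METR study itself.

def speed_multiplier_from_time_change(time_change_pct: float) -> float:
    """If completion time changes by time_change_pct percent, speed changes by this multiplier."""
    return 1.0 / (1.0 + time_change_pct / 100.0)

observed = speed_multiplier_from_time_change(+19)   # measured: tasks took 19% longer
expected = speed_multiplier_from_time_change(-24)   # forecast: tasks 24% faster
perceived = speed_multiplier_from_time_change(-20)  # developers' own post-hoc estimate

print(f"Observed:  {observed:.2f}x speed (about {100 * (1 - observed):.0f}% slower)")  # ~0.84x, ~16% slower
print(f"Expected:  {expected:.2f}x speed")                                             # ~1.32x
print(f"Perceived: {perceived:.2f}x speed")                                            # ~1.25x
```

In other words, tasks taking 19 per cent longer corresponds to roughly a 16 per cent drop in throughput, a long way from the roughly 2x speedup Mr Rush had written down beforehand.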


Time Magazine
01-07-2025
- Science
- Time Magazine
In the Loop: Is AI Making the Next Pandemic More Likely?
Welcome back to In the Loop, TIME's new twice-weekly newsletter about AI. Starting today, we'll be publishing these editions both as stories on our website and as emails. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?

What to Know

If you talk to staff at the top AI labs, you'll hear a lot of stories about how the future could go fantastically well—or terribly badly. And of all the ways that AI might cause harm to the human race, there's one that scientists in the industry are particularly worried about today. That's the possibility of AI helping bad actors to start a new pandemic. 'You could try to synthesize something like COVID or a more dangerous version of the flu—and basically, our modeling suggests that this might be possible,' Anthropic's chief scientist, Jared Kaplan, told me in May.

Measuring the risk — In a new study published this morning, and shared exclusively with TIME ahead of its release, we got the first hard numbers on how experts think the risk of a new pandemic might have increased thanks to AI. The Forecasting Research Institute polled experts earlier this year, asking them how likely a human-caused pandemic might be—and how likely it might become if humans had access to AI that could reliably give advice on how to build a bioweapon.

What they found — Experts, who were polled between December and February, put the risk of a human-caused pandemic at 0.3% per year. But, they said, that risk would jump fivefold, to 1.5% per year, if AI were able to provide human-level virology advice.

You can guess where this is going — Then, in April, the researchers tested today's AI tools on a new virology troubleshooting benchmark. They found that today's AI tools outperform PhD-level virologists at complex troubleshooting tasks in the lab. In other words, AI can now do the very thing that forecasters warned would increase the risk of a human-caused pandemic fivefold. We just published the full story; you can read it here.

Who to Know

Person in the news – Matthew Prince, CEO of Cloudflare. Since its founding in 2009, Cloudflare has been protecting sites on the internet from being knocked offline by large influxes of traffic, or indeed coordinated attacks. Now, some 20% of the internet is covered by its network. And today, Cloudflare announced that this network would begin to block AI crawlers by default — essentially putting a fifth of the internet behind a paywall for the bots that harvest info to train AIs like ChatGPT and Claude.

Step back — Today's AI is so powerful because it has essentially inhaled the whole of the internet — from my articles to your profile photos. By running neural networks over that data using immense quantities of computing power, AI companies have taught these systems the texture of the world at such an enormous scale that it has given rise to new AI capabilities, like the ability to answer questions on almost any topic, or to generate photorealistic images. But this scraping has sparked a huge backlash from publishers, artists and writers, who complain that it has been done without any consent or compensation.

A new model — Cloudflare says the move will 'fundamentally change how AI companies access web content going forward.' Major publishers, including TIME, have expressed their support for the shift toward an 'opt-in' rather than an 'opt-out' system, the company says.
Cloudflare also says it is working on a new initiative, called Pay Per Crawl, in which creators will have the option of setting a price on their data in return for making it available to train AI.

Fighting words — Prince was not available for an interview this week. But at a recent conference, he disclosed that traffic to news sites had dropped precipitously across the board thanks to AI, in a shift that many worry will imperil the existence of the free press. 'I go to war every single day with the Chinese government, the Russian government, the Iranians, the North Koreans, probably Americans, the Israelis — all of them who are trying to hack into our customer sites,' Prince said. 'And you're telling me I can't stop some nerd with a C-corporation in Palo Alto?'

AI in Action

61% of U.S. adults have used AI in the last six months, and 19% interact with it daily, according to a new survey of AI adoption by the venture capital firm Menlo Ventures. But just 3% of those users pay for access to the software, Menlo estimated based on the survey's results—suggesting 97% of users only use the free tier of AI tools. AI usage figures are higher for Americans in the workforce than for other groups. Some 75% of employed adults have used AI in the last six months, including 26% who report using it daily, according to the survey. Students also report high AI usage: 85% have used it in the last six months, and 22% say they use it daily. The statistics seem to suggest that some students and workers are growing dependent on free AI tools—a usage pattern that might become lucrative if AI companies were to begin restricting access or raising prices. However, the proliferation of open-source AI models has created intense price competition that may limit any single company's ability to dramatically raise prices. As always, if you have an interesting story of AI in Action, we'd love to hear it. Email us at: intheloop@

What we're reading

'The Dead Have Never Been This Talkative': The Rise of AI Resurrection, by Tharin Pillay in TIME

With the rise of image-to-video tools like the newest version of Midjourney, the world recently crossed a threshold: it's now possible, in just a few clicks, to reanimate a photo of your dead relative. You can train a chatbot on snippets of their writing to replicate their patterns of speech; if you have a long enough clip of them speaking, you can also replicate their voice. Will these tools make it easier to process the heart-rending pain of bereavement? Or might their allure in fact make it harder to move forward? My colleague Tharin published a deeply insightful piece last week about the rise of this new technology. It's certainly a weird time to be alive. Or, indeed, to be dead.


Mint
23-06-2025
- Science
- Mint
Colleagues or overlords? The debate over AI bots has been raging but needn't
There's the Terminator school of perceiving artificial intelligence (AI) risks, in which we'll all be killed by our robot overlords. And then there's one where, if not friends exactly, the machines serve as valued colleagues. A Japanese tech researcher is arguing that our global AI safety approach hinges on reframing efforts to achieve this benign partnership.

In 2023, as the world was shaken by the release of ChatGPT, a pair of successive warnings came from Silicon Valley of existential threats from powerful AI tools. Elon Musk led a group of experts and industry executives in calling for a six-month pause in developing advanced systems until we figured out how to manage risks. Then hundreds of AI leaders—including Sam Altman of OpenAI and Demis Hassabis of Alphabet's DeepMind—sent shockwaves with a statement that warned: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.'

Despite all the attention paid to the potentially catastrophic dangers, the years since have been marked by AI 'accelerationists' largely drowning out AI doomers. Companies and countries have raced towards being the first to achieve superhuman AI, brushing off the early calls to prioritise safety. And it has all left the public very confused. But maybe we've been viewing this all wrong.

Hiroshi Yamakawa, a prominent AI scholar from the University of Tokyo who has spent the past three decades studying the technology, is now arguing that the most promising route to a sustainable future is to let humans and AIs 'live in symbiosis and flourish together, protecting each other's well-being and averting catastrophic risks.'

Yamakawa hit a nerve because while he recognizes the threats noted in 2023, he argues for a working path toward coexistence with super-intelligent machines—especially at a time when nobody is halting development over fears of falling behind. In other words, if we can't stop AI from becoming smarter than us, we're better off joining it as an equal partner. 'Equality' is the sensitive part. Humans want to keep believing they are superior, not equal to machines.

His statement has generated a lot of buzz in Japanese academic circles, receiving dozens of signatories so far, including from some influential AI safety researchers overseas. In an interview with Nikkei Asia, he argued that cultural differences in Asia are more likely to enable seeing machines as peers instead of as adversaries. While the United States has produced AI-inspired characters like the Terminator from the eponymous Hollywood movie, the Japanese have envisioned friendlier companions like Astro Boy or Doraemon, he told the news outlet. Beyond pop culture, there's some truth to this cultural embrace. At just 25%, Japanese people had the lowest share of respondents who say products using AI make them nervous, according to a global Ipsos survey last June, compared to 64% of Americans.

It's likely his comments will fall on deaf ears, though, like so many of the other AI risk warnings. Development has its own momentum. And whether the machines will ever get to a point where they could spur 'civilization extinction' remains an extremely heated debate.
It's fair to say that some of the industry's focus on far-off, science-fiction scenarios is meant to distract from the more immediate harm that the technology could bring—whether that's job displacement, allegations of copyright infringement or reneging on climate change goals. Still, Yamakawa's proposal is a timely re-up on an AI safety debate that has languished in recent years. These discussions can't just rely on eyebrow-raising warnings and the absence of governance.

With the exception of Europe, most jurisdictions have focused on loosening regulations in the hope of not falling behind. Policymakers can't afford to turn a blind eye until it's too late. It also shows the need for more safety research beyond just the companies trying to create and sell these products, much as in the social-media era, when the platforms were less incentivized to share their findings with the public. Governments and universities must prioritise independent analysis on large-scale AI risks.

Meanwhile, as the global tech industry has been caught up in a race to create computer systems that are smarter than humans, it's yet to be determined whether we'll ever get there. But setting godlike AI as the goalpost has created a lot of counter-productive fear-mongering. There might be merit in seeing these machines as colleagues and not overlords.

©Bloomberg

The author is a Bloomberg Opinion columnist covering Asia tech.


NDTV
19-06-2025
- Science
- NDTV
Opinion: Make the Robot Your Colleague, Not Overlord
There's the Terminator school of perceiving artificial intelligence risks, in which we'll all be killed by our robot overlords. And then there's one where, if not friends exactly, the machines serve as valued colleagues. A Japanese tech researcher is arguing that our global AI safety approach hinges on reframing efforts to achieve this benign partnership.

In 2023, as the world was shaken by the release of ChatGPT, a pair of successive warnings came from Silicon Valley of existential threats from powerful AI tools. Elon Musk led a group of experts and industry executives in calling for a six-month pause in developing advanced systems until we figured out how to manage risks. Then hundreds of AI leaders - including Sam Altman of OpenAI and Demis Hassabis of Alphabet Inc.'s DeepMind - sent shockwaves with a statement that warned: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war."

Despite all the attention paid to the potentially catastrophic dangers, the years since have been marked by "accelerationists" largely drowning out the doomers. Companies and countries have raced toward being the first to achieve superhuman AI, brushing off the early calls to prioritize safety. And it has all left the public very confused. But maybe we've been viewing this all wrong.

Hiroshi Yamakawa, a prominent AI scholar from the University of Tokyo who has spent the past three decades researching the technology, is now arguing that the most promising route to a sustainable future is to let humans and AIs "live in symbiosis and flourish together, protecting each other's well-being and averting catastrophic risks." Well, kumbaya.

Yamakawa hit a nerve because while he recognizes the threats noted in 2023, he argues for a working path toward coexistence with super-intelligent machines - especially at a time when nobody is halting development over fears of falling behind. In other words, if we can't stop AI from becoming smarter than us, we're better off joining it as an equal partner. "Equality" is the sensitive part. Humans want to keep believing they are superior, not equal to the machines.

His statement has generated a lot of buzz in Japanese academic circles, receiving dozens of signatories so far, including from some influential AI safety researchers overseas. In an interview with Nikkei Asia, he argued that cultural differences in Asia are more likely to enable seeing machines as peers instead of as adversaries. While the US has produced AI-inspired characters like the Terminator, the Japanese have envisioned friendlier companions like Astro Boy or Doraemon, he told the news outlet. Beyond pop culture, there's some truth to this cultural embrace. At just 25%, Japanese people had the lowest share of respondents who say products using AI make them nervous, according to a global Ipsos survey last June, compared to 64% of Americans.

It's likely his comments will fall on deaf ears, though, like so many of the other AI risk warnings. Development has its own momentum. And whether the machines will ever get to a point where they could spur "civilization extinction" remains an extremely heated debate. It's fair to say that some of the industry's focus on far-off, science-fiction scenarios is meant to distract from the more immediate harm that the technology could bring - whether that's job displacement, allegations of copyright infringement or reneging on climate change goals.
Still, Yamakawa's proposal is a timely re-up on an AI safety debate that has languished in recent years. These discussions can't just rely on eyebrow-raising warnings and the absence of governance. With the exception of Europe, most jurisdictions have focused on loosening regulations in the hope of not falling behind. Policymakers can't afford to turn a blind eye until it's too late. It also shows the need for more safety research beyond just the companies trying to create and sell these products, much as in the social-media era, when the platforms were less incentivized to share their findings with the public. Governments and universities must prioritize independent analysis on large-scale AI risks.

Meanwhile, as the global tech industry has been caught up in a race to create computer systems that are smarter than humans, it's yet to be determined whether we'll ever get there. But setting godlike AI as the goalpost has created a lot of counter-productive fearmongering. There might be merit in viewing these machines as colleagues and not overlords.