

Time Magazine
4 days ago
- Science
AI Promised Faster Coding. This Study Disagrees
Welcome back to In the Loop, TIME's new twice-weekly newsletter about the world of AI. We're publishing installments both as stories on Time.com and as emails. If you're reading this in your browser, you can subscribe to have the next one delivered straight to your inbox.

What to Know: Could coding with AI slow you down?

In just the last couple of years, AI has totally transformed the world of software engineering. Writing your own code (from scratch, at least) has become quaint. Now, with tools like Cursor and Copilot, human developers can marshal AI to write code for them. The human role is now to understand what to ask the models for the best results, and to iron out the inevitable problems that crop up along the way. Conventional wisdom holds that this has accelerated software engineering significantly. But has it? A new study by METR, published last week, set out to measure the degree to which AI speeds up the work of experienced software developers. The results were very unexpected.

What the study found — METR measured the speed of 16 developers working on complex software projects, both with and without AI assistance. After finishing their tasks, the developers estimated that access to AI had accelerated their work by 20% on average. In fact, the measurements showed that AI had slowed them down by about 20%. The results were roundly met with surprise in the AI community. 'I was pretty skeptical that this study was worth running, because I thought that obviously we would see significant speedup,' wrote David Rein, a staffer at METR, in a post on X.

Why did this happen? — The simple technical answer seems to be: while today's LLMs are good at coding, they're often not good enough to intuit exactly what a developer wants and answer perfectly in one shot. That means they can require a lot of back and forth, which might take longer than if you just wrote the code yourself. But participants in the study offered several more human hypotheses, too. 'LLMs are a big dopamine shortcut button that may one-shot your problem,' wrote Quentin Anthony, one of the 16 coders who participated in the experiment. 'Do you keep pressing the button that has a 1% chance of fixing everything? It's a lot more enjoyable than the grueling alternative.' (It's also easy to get sucked into scrolling social media while you wait for your LLM to generate an answer, he added.)

What it means for AI — The study's authors urged readers not to generalize too broadly from the results. For one, the study only measures the impact of LLMs on experienced coders, not new ones, who might benefit more from their help. And developers are still learning how to get the most out of LLMs, which are relatively new tools with strange idiosyncrasies. Other METR research, they noted, shows the duration of software tasks that AI is able to do doubling every seven months—meaning that even if today's AI is detrimental to one's productivity, tomorrow's might not be.

Who to Know: Jensen Huang, CEO of Nvidia

Huang finds himself in the news today after he proclaimed on CNN that the U.S. government doesn't 'have to worry' about the possibility of the Chinese military using the market-leading AI chips that his company, Nvidia, produces. 'They simply can't rely on it,' he said. 'It could be, of course, limited at any time.'

Chipping away — Huang was arguing against policies that have seen the U.S. heavily restrict the export of graphics processing units, or GPUs, to China, in a bid to hamstring Beijing's military capabilities and AI progress.
Nvidia claims that these policies have simply incentivized China to build its own rival chip supply chain, while hurting U.S. companies and, by extension, the U.S. economy.

Self-serving argument — Huang, of course, would say that, as CEO of a company that has lost out on billions as a result of being blocked from selling its most advanced chips to the Chinese market. He made his case to President Donald Trump in a recent meeting at the White House, Bloomberg reported.

In fact… The Chinese military does use Nvidia chips, according to research by Georgetown's Center for Security and Emerging Technology, which analyzed 66,000 military purchasing records to come to that conclusion. A large black market has also sprung up to smuggle Nvidia chips into China since the export controls came into place, the New York Times reported last year.

AI in Action

Anthropic's AI assistant, Claude, is transforming the way the company's scientists keep up with the thousands of pages of scientific literature published every day in their field. Instead of reading papers, many Anthropic researchers now simply upload them into Claude and chat with the assistant to distill the main findings. 'I've changed my habits of how I read papers,' Jan Leike, a senior alignment researcher at Anthropic, told TIME earlier this year. 'Where now, usually I just put them into Claude, and ask: can you explain?'

To be clear, Leike adds, sometimes Claude gets important stuff wrong. 'But also, if I just skim-read the paper, I'm also gonna get important stuff wrong sometimes,' Leike says. 'I think the bigger effect here is, it allows me to read much more papers than I did before.' That, he says, is having a positive impact on his productivity. 'A lot of time when you're reading papers is just about figuring out whether the paper is relevant to what you're trying to do at all,' he says. 'And that part is so fast [now], you can just focus on the papers that actually matter.'

What We're Reading

Microsoft and OpenAI's AGI Fight Is Bigger Than a Contract — By Steven Levy in Wired

Steven Levy goes deep on the 'AGI' clause in the contract between OpenAI and Microsoft, which could decide the fate of their multibillion-dollar partnership. It's worth reading to better understand how both sides are thinking about defining AGI. They could do worse than Levy's own description: 'a technology that makes Sauron's Ring of Power look like a dime-store plastic doodad.'


Time Magazine
11-07-2025
- Business
A Blueprint for Redistributing AI's Profits
Welcome back to In the Loop, TIME's new twice-weekly newsletter about the world of AI. We're publishing installments both as stories on Time.com and as emails. If you're reading this in your browser, you can subscribe to have the next one delivered straight to your inbox.

What to Know

Let's say, sometime in the next few years, artificial intelligence automates most of the jobs that humans currently do. If that happens, how can we avoid societal collapse? This question, once the stuff of science fiction, is now very real. In May, Anthropic CEO Dario Amodei warned that AI could wipe out half of all entry-level white-collar jobs in the next one to five years, and send unemployment as high as 20%. In response to such a prediction, you might expect states to begin seriously drawing up contingency plans. Not so. But a growing group of academic economists is working on this question. A new paper, published this Tuesday, suggests a novel way for states to protect their populations in a world of mass AI-enabled job loss.

Sovereign wealth funds — The paper recommends that states invest in AI-related industries via sovereign wealth funds, the same type of financial vehicle that has allowed the likes of Norway and the United Arab Emirates to diversify their oil wealth. This isn't strictly a new idea. In fact, the UAE has already been investing billions of dollars from its sovereign wealth funds into AI. Nvidia, the semiconductor company, has been urging states to invest in 'sovereign AI' for years now. But unlike those examples, which are focused on yielding as big a return on investment as possible, or on exerting geopolitical influence over AI, the paper lays out the social reasons that this might be a good idea.

AI welfare state — The paper argues that if transformative AI is around the corner, the ability of states to economically provide for their citizens may be directly tied to how exposed they are to AI's upside. 'Such investments can be seen as part of the state's responsibility to safeguard public welfare in the face of disruptive technological change,' the paper argues. The returns on investment could be used to fund universal basic income, or a 'stabilization fund' that could allow states to 'absorb shocks, redistribute benefits, and support displaced workers in real time.'

Reasons to be skeptical — To be sure, this approach has risks. States investing billions in AI could paradoxically accelerate the very job-automation trends that they're seeking to mitigate. On the flip side, if AI turns out to be less transformative than expected, piling in at the top of the market could bring losses. And just as with retail investing, any potential upside is correlated with how much money you have available in the first place. Rich countries will have the opportunity to become richer; poor countries will struggle to participate at all. 'As [transformative AI]-generated wealth risks deepening global inequality, it may also be necessary to explore new models for transnational benefit-sharing,' the paper notes, 'including internationally coordinated sovereign investment vehicles that allocate a portion of AI-derived returns toward global public goods.'

Who to Know

Person in the news – Elon Musk, owner of xAI

Late Wednesday night, with the launch of Grok 4, the AI race got a new leader: Elon Musk. At least, that's if you believe the benchmarks, which show Grok trouncing competition from OpenAI, Google and Anthropic on some of the industry's most difficult tests.
On ARC-AGI-2, a benchmark designed to be easy for humans but difficult for AIs, Grok 4's reasoning mode scored 16.2%—nearly double the score of its closest contender (Anthropic's Claude Opus 4).

Unexpected result — Musk's xAI has not traditionally been seen as a 'frontier' AI company, despite its huge cache of GPU chips. Previous releases of Grok delivered middling performance. And just a day before Grok 4's release, an earlier version of the chatbot had a meltdown on X, repeatedly referring to itself as 'MechaHitler' and sharing violent rape fantasies. (The posts were later deleted.) Episodes like this had encouraged hopes, in at least some corners of the AI world, that the far-right billionaire's attempts to make his bot more 'truth-seeking' were actually making it more stupid.

Musk on Grok and AGI — On a livestream broadcast on X on Wednesday night, Musk said Grok 4 had been trained using 10 times as much computing power as Grok 3. 'With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions,' he said. On the subject of artificial general intelligence, he said: 'Will this be bad or good for humanity? I think it'll be good. Most likely it'll be good. But I've somewhat reconciled myself to the fact that even if it wasn't gonna be good, I'd at least like to be alive to see it happen.'

What Musk does best — If Grok 4's benchmark scores are borne out, it would mean that Musk's core skill — spinning up cracked engineering teams that are single-mindedly focused on one goal — is applicable to the world of AI. That will worry those in the industry who care about not just developing AI quickly, but also doing so safely. As the MechaHitler debacle showed, neither Musk nor anybody else yet knows how to prevent current AI systems from going badly out of control. 'If you can't prevent your AI from endorsing Hitler,' says expert forecaster Peter Wildeford, 'how can we trust you with ensuring far more complex future AGI can be deployed safely?'

AI in Action

Where is AI? Last year, I wrote about a group of researchers who had attempted to answer that question. Now, the team at Epoch AI has gone one step further, building an interactive map of more than 500 AI supercomputers to track exactly where the world's major AI infrastructure is located. The map confirms what I wrote last year: AI compute is concentrated in rich countries, with the vast majority in the U.S. and China, followed by Europe and the Persian Gulf. As always, if you have an interesting story of AI in Action, we'd love to hear it. Email us at: intheloop@time.com

What We're Reading

They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling — By Kashmir Hill in the New York Times

Large language models are mysterious, shapeshifting artifacts. Their creators train them to adopt helpful personas—but sometimes these personas can slip, revealing a darker side that can lead some vulnerable users down conspiratorial rabbit holes. That tendency was especially stark earlier this year, when OpenAI shipped an update to ChatGPT that inadvertently caused the bot to become more sycophantic—meaning it would egg on almost anything a user said, delusional or not. Kashmir Hill, one of the best reporters in the business, spoke to many users who experienced this behavior, and found some shocking personal stories… including one that turned out to be fatal.


Time Magazine
01-07-2025
- Science
In the Loop: Is AI Making the Next Pandemic More Likely?
Welcome back to In the Loop, TIME's new twice-weekly newsletter about AI. Starting today, we'll be publishing these editions both as stories on Time.com and as emails. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?

What to Know

If you talk to staff at the top AI labs, you'll hear a lot of stories about how the future could go fantastically well—or terribly badly. And of all the ways that AI might cause harm to the human race, there's one that scientists in the industry are particularly worried about today: the possibility of AI helping bad actors to start a new pandemic. 'You could try to synthesize something like COVID or a more dangerous version of the flu—and basically, our modeling suggests that this might be possible,' Anthropic's chief scientist, Jared Kaplan, told me in May.

Measuring the risk — In a new study published this morning, and shared exclusively with TIME ahead of its release, we got the first hard numbers on how experts think the risk of a new pandemic might have increased thanks to AI. The Forecasting Research Institute polled experts earlier this year, asking them how likely a human-caused pandemic might be—and how likely it might become if humans had access to AI that could reliably give advice on how to build a bioweapon.

What they found — The experts, who were polled between December and February, put the risk of a human-caused pandemic at 0.3% per year. But, they said, that risk would jump fivefold, to 1.5% per year, if AI were able to provide human-level virology advice.

You can guess where this is going — Then, in April, the researchers tested today's AI tools on a new virology troubleshooting benchmark. They found that today's AI tools outperform PhD-level virologists at complex troubleshooting tasks in the lab. In other words, AI can now do the very thing that forecasters warned would increase the risk of a human-caused pandemic fivefold. We just published the full story on Time.com; you can read it here.

Who to Know

Person in the news – Matthew Prince, CEO of Cloudflare

Since its founding in 2009, Cloudflare has been protecting sites on the internet from being knocked offline by large influxes of traffic, or indeed by coordinated attacks. Now, some 20% of the internet is covered by its network. And today, Cloudflare announced that this network would begin to block AI crawlers by default — essentially putting a fifth of the internet behind a paywall for the bots that harvest information to train AIs like ChatGPT and Claude.

Step back — Today's AI is so powerful because it has essentially inhaled the whole of the internet — from my articles to your profile photos. By running neural networks over that data using immense quantities of computing power, AI companies have taught these systems the texture of the world at such an enormous scale that it has given rise to new AI capabilities, like the ability to answer questions on almost any topic, or to generate photorealistic images. But this scraping has sparked a huge backlash from publishers, artists and writers, who complain that it has been done without any consent or compensation.

A new model — Cloudflare says the move will 'fundamentally change how AI companies access web content going forward.' Major publishers, including TIME, have expressed their support for the shift toward an 'opt-in' rather than an 'opt-out' system, the company says.
Cloudflare also says it is working on a new initiative, called Pay Per Crawl, in which creators will have the option of setting a price on their data in return for making it available to train AI.

Fighting words — Prince was not available for an interview this week. But at a recent conference, he disclosed that traffic to news sites had dropped precipitously across the board thanks to AI, a shift that many worry will imperil the existence of the free press. 'I go to war every single day with the Chinese government, the Russian government, the Iranians, the North Koreans, probably Americans, the Israelis — all of them who are trying to hack into our customer sites,' Prince said. 'And you're telling me I can't stop some nerd with a C-corporation in Palo Alto?'

AI in Action

61% of U.S. adults have used AI in the last six months, and 19% interact with it daily, according to a new survey of AI adoption by the venture capital firm Menlo Ventures. But just 3% of those users pay for access to the software, Menlo estimated based on the survey's results—suggesting that 97% of users rely only on the free tiers of AI tools.

AI usage figures are higher for Americans in the workforce than for other groups. Some 75% of employed adults have used AI in the last six months, including 26% who report using it daily, according to the survey. Students also report high AI usage: 85% have used it in the last six months, and 22% say they use it daily. The statistics suggest that some students and workers are growing dependent on free AI tools—a usage pattern that could become lucrative if AI companies were to begin restricting access or raising prices. However, the proliferation of open-source AI models has created intense price competition that may limit any single company's ability to raise prices dramatically.

As always, if you have an interesting story of AI in Action, we'd love to hear it. Email us at: intheloop@time.com

What We're Reading

'The Dead Have Never Been This Talkative': The Rise of AI Resurrection — By Tharin Pillay in TIME

With the rise of image-to-video tools like the newest version of Midjourney, the world recently crossed a threshold: it's now possible, in just a few clicks, to reanimate a photo of your dead relative. You can train a chatbot on snippets of their writing to replicate their patterns of speech; if you have a long enough clip of them speaking, you can also replicate their voice. Will these tools make it easier to process the heart-rending pain of bereavement? Or might their allure in fact make it harder to move forward? My colleague Tharin published a deeply insightful piece last week about the rise of this new technology. It's certainly a weird time to be alive. Or, indeed, to be dead.


Hindustan Times
07-05-2025
- Politics
10 surprising facts about the papal conclave, the most secretive election in the world
Vatican City is preparing to conduct the papal conclave on Wednesday, May 7, to choose a new leader for the world's 1.3 billion Catholics after the death of 88-year-old Pope Francis. The secret conclave will begin with centuries-old rituals: participating cardinals swear a sacred oath and pierce their ballots, while the outside world waits for white smoke to billow from the Sistine Chapel's chimney.

The longest conclave ever began in 1268 and lasted 1,006 days, nearly three years. There is no time limit. The election involves an elaborate process in which the cardinals take oaths and mark their ballots up to four times each day, until a two-thirds majority agrees on a single name.

Oath of secrecy
Before entering the election, each participating cardinal is sworn to secrecy and may not reveal what went on inside. Anyone found using any kind of audio or visual recording device, or found guilty of accepting money in return for their vote, faces automatic excommunication.

No communication with the outside world
To stop the outside world from influencing the conclave, communication with the outside world is completely cut off until the new Pope is selected.

Windows tinted black, jammers in place
Before the voting begins, technicians install powerful signal jammers inside the premises of the Sistine Chapel and black out the windows overlooking the areas where the election will take place.

Food that could hide messages is banned
During the conclave, foods that could conceal messages of any kind, such as pies and whole chickens, are prohibited inside the Sistine Chapel, where the election is conducted. So what do the cardinals eat? By tradition, nuns prepare local foods such as spaghetti, lamb, and boiled vegetables.

Black and white smoke will tell
To signal to the outside world that the next Pope has been selected, the chimney emits white smoke. If a voting round remains inconclusive, the chimney billows black smoke. To make the signal even clearer, bells are rung when the cardinals successfully choose the next leader.

Cardinals over 80 can't vote
Of all the cardinals in the world, only those under the age of 80 are eligible to vote. A total of 133 cardinals are set to take part in the 2025 papal conclave.

The Pope does not need to be a cardinal
Every Pope elected until now has been a cardinal, but that is not a requirement: any baptized Catholic male can be made Pope. The last non-cardinal to assume the role was Urban VI in 1378.

Thousands of spectators arrive in Rome
Vatican City is expecting large crowds in St. Peter's Square, eagerly awaiting the white smoke from the chimney. According to Forbes, flight searches from the U.S. to Rome surged 345% after the death of Pope Francis.

The papal conclave has inspired an award-winning film
The movie Conclave (2024) was based on a 2016 novel of the same name. The film went on to receive the Academy Award for Best Adapted Screenplay.

(With agency inputs)


Time of India
27-04-2025
- Entertainment
The devastating true story behind A Tragedy Foretold: Flight 3054 on Netflix: What led to the crash?
A few days ago, A Tragedy Foretold: Flight 3054 was released on the streaming service Netflix. Since then, it has generated a lot of online discussion among viewers curious about the event the documentary series is based on. The series revisits the tragic crash of July 17, 2007, in São Paulo, Brazil, which is regarded as one of the worst aviation disasters in Brazilian history. Here's all you need to know about the true story behind A Tragedy Foretold: Flight 3054.

The true story behind A Tragedy Foretold: Flight 3054 on Netflix
In addition to the tragic effects on the families of the victims, the three-episode documentary series examines the changes that the disaster brought about in the Brazilian aviation industry. The production thoroughly examines each breakdown in the chain of events that led to the crash. On the day of the disaster, the plane, a TAM-operated Airbus A320, failed to stop after touching down at Congonhas Airport in São Paulo. It overran the runway, crossed Washington Luís Avenue, and struck a gas station and a TAM building before exploding. At the time, Brazil was experiencing what became known as the "aviation blackout" of 2006 and 2007, a crisis in the country's civil aviation industry that affected millions of passengers through severe delays, flight cancellations, and more.

How many people died?
All 187 passengers and crew members aboard the flight died, along with 12 people on the ground. An investigation found that the accident was caused by a disregard for basic safety precautions. In the documentary series, families of the victims remember the day of the disaster and the traumatic wait to identify the bodies. Because some victims' bodies were completely destroyed in the crash, some families were unable to bury their loved ones.

A Tragedy Foretold: Flight 3054 is now streaming on Netflix.