
Latest news with #IntheLoop

Inside Trump's Long-Awaited AI Strategy

Time Magazine • Business • 6 days ago

Welcome back to In the Loop, TIME's new twice-weekly newsletter about the world of AI. If you're reading this in your browser, you can subscribe to have the next one delivered straight to your inbox.

What to Know: Trump's AI Action Plan

President Trump will deliver a major speech on Wednesday at an event in Washington, D.C., titled 'Winning the AI Race,' where he is expected to unveil his long-awaited AI action plan. The 20-page, high-level document will focus on three main areas, according to a person with knowledge of the matter. It will come as a mixture of directives to federal agencies, along with some grant programs. 'It's mostly carrots, not sticks,' the person said.

Pillar 1: Infrastructure — The first pillar of the action plan is about AI infrastructure. The plan emphasizes the importance of overhauling permitting rules to ease the building of new data centers. It will also focus on the need to modernize the energy grid, including by adding new sources of power.

Pillar 2: Innovation — Second, the action plan will argue that the U.S. needs to lead the world on innovation. It will focus on removing red tape, and will revive the idea of blocking states from regulating AI—although mostly as a symbolic gesture, since the White House's ability to tell states what to do is limited. And it will warn other countries against harming U.S. companies' ability to develop AI, the person said. This section of the plan will also encourage the development of so-called 'open-weights' AI models, which allow developers to download models, modify them, and run them locally.

Pillar 3: Global influence — The third pillar of the action plan will emphasize the importance of spreading American AI around the world, so that foreign countries don't come to rely on Chinese models or chips. DeepSeek and other recent Chinese models could become a useful source of geopolitical leverage if they continue to be widely adopted, officials worry. So, part of the plan will focus on ways to ensure that U.S. allies and other countries around the world adopt American models instead.

Who to Know: Michael Druggan, Former xAI Employee

Elon Musk's xAI fired an employee who had welcomed the possibility of AI wiping out humanity in posts on X that drew widespread attention and condemnation. 'I would like to announce that I am no longer employed at xAI,' Michael Druggan, a mathematician who worked on creating expert datasets for training Grok's reasoning model, according to his resume, wrote on X. 'This separation comes as a result of things I posted on this account relating to my stance on AI philosophy.'

What he said — In response to a post questioning why any super-intelligent AI would decide to cooperate with humans, rather than wiping them out, Druggan had written: 'It won't and that's OK. We can pass the torch to the new most intelligent species in the known universe.' When a commenter replied that he would prefer for his child to live, Druggan replied: 'Selfish tbh.' Druggan has identified himself in other posts as a member of the 'worthy successor' movement—a transhumanist group that believes humans should welcome their inevitable replacement by super-intelligent AI, and work to make it as intelligent and morally valuable as possible.

X firestorm — The controversial posts were picked up by AI Safety Memes, an X account. The account had in the preceding days sparred with Druggan over posts in which the xAI employee had defended Grok advising a user that they should assassinate a world leader if they wanted to get attention. 'This xAI employee is openly OK with AI causing human extinction,' the account wrote in a tweet that appears to have been noticed by Musk. After Druggan announced he was no longer employed at xAI, Musk replied to AI Safety Memes with a two-word post: 'Philosophical disagreements.'

Succession planning — Druggan did not respond to a request for comment. But in a separate post, he clarified his views. 'I don't want human extinction, of course,' he wrote. 'I'm human and I quite like being alive. But, in a cosmic sense, I recognize that humans might not always be the most important thing.'

AI in Action

Last week we got another worrying insight into ChatGPT's ability to send users down delusional rabbit holes—this time with perhaps the most high-profile individual yet. Geoff Lewis, a venture capitalist, posted on X screenshots of his chats with ChatGPT. 'I've long used GPT as a tool in pursuit of my core value: Truth,' he wrote. 'Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern.' The screenshots appear to show ChatGPT roleplaying a conspiracy theory-style scenario in which Lewis had discovered a secret entity known as 'Mirrorthread,' supposedly associated with 12 deaths. Some observers noted that the text's style appeared to mirror that of the community-written 'SCP' fan fiction, and that Lewis appeared to have confused this roleplaying for reality. 'This is an important event: the first time AI-induced psychosis has affected a well-respected and high achieving individual,' Max Spero, CEO of a company focused on detecting 'AI slop,' wrote on X. Lewis did not respond to a request for comment.

What We're Reading

Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety

A new paper coauthored by dozens of top AI researchers at OpenAI, DeepMind, Anthropic, and more calls on companies to ensure that future AIs continue to 'think' in human languages, arguing that this is a 'new and fragile opportunity' to make sure AIs aren't deceiving their human creators. Current 'reasoning' models think in language, but a new trend in AI research toward outcome-based reinforcement learning threatens to undermine this 'easy win' for AI safety. I found this paper especially interesting because it hit on a dynamic that I wrote about six months ago, here.

AI Promised Faster Coding. This Study Disagrees

Time Magazine • Science • July 15, 2025

Welcome back to In the Loop, TIME's new twice-weekly newsletter about the world of AI. We're publishing installments both as stories on TIME's website and as emails. If you're reading this in your browser, you can subscribe to have the next one delivered straight to your inbox.

What to Know: Could coding with AI slow you down?

In just the last couple of years, AI has totally transformed the world of software engineering. Writing your own code (from scratch, at least) has become quaint. Now, with tools like Cursor and Copilot, human developers can marshal AI to write code for them. The human role is now to understand what to ask the models for the best results, and to iron out the inevitable problems that crop up along the way. Conventional wisdom states that this has accelerated software engineering significantly. But has it? A new study by METR, published last week, set out to measure the degree to which AI speeds up the work of experienced software developers. The results were very unexpected.

What the study found — METR measured the speed of 16 developers working on complex software projects, both with and without AI assistance. After finishing their tasks, the developers estimated that access to AI had accelerated their work by 20% on average. In fact, the measurements showed that AI had slowed them down by about 20%. The results were roundly met with surprise in the AI community. 'I was pretty skeptical that this study was worth running, because I thought that obviously we would see significant speedup,' wrote David Rein, a staffer at METR, in a post on X.

Why did this happen? — The simple technical answer seems to be: while today's LLMs are good at coding, they're often not good enough to intuit exactly what a developer wants and answer perfectly in one shot. That means they can require a lot of back and forth, which might take longer than if you just wrote the code yourself. But participants in the study offered several more human hypotheses, too. 'LLMs are a big dopamine shortcut button that may one-shot your problem,' wrote Quentin Anthony, one of the 16 coders who participated in the experiment. 'Do you keep pressing the button that has a 1% chance of fixing everything? It's a lot more enjoyable than the grueling alternative.' (It's also easy to get sucked into scrolling social media while you wait for your LLM to generate an answer, he added.)

What it means for AI — The study's authors urged readers not to generalize too broadly from the results. For one, the study only measures the impact of LLMs on experienced coders, not new ones, who might benefit more from their help. And developers are still learning how to get the most out of LLMs, which are relatively new tools with strange idiosyncrasies. Other METR research, they noted, shows the duration of software tasks that AI is able to do doubling every seven months—meaning that even if today's AI is detrimental to one's productivity, tomorrow's might not be.

Who to Know: Jensen Huang, CEO of Nvidia

Huang finds himself in the news today after he proclaimed on CNN that the U.S. government doesn't 'have to worry' about the possibility of the Chinese military using the market-leading AI chips that his company, Nvidia, produces. 'They simply can't rely on it,' he said. 'It could be, of course, limited at any time.'

Chipping away — Huang was arguing against policies that have seen the U.S. heavily restrict the export of graphics processing units, or GPUs, to China, in a bid to hamstring Beijing's military capabilities and AI progress. Nvidia claims that these policies have simply incentivized China to build its own rival chip supply chain, while hurting U.S. companies and, by extension, the U.S. economy.

Self-serving argument — Huang, of course, would say that, as CEO of a company that has lost out on billions as a result of being blocked from selling its most advanced chips to the Chinese market. He has been trying to convince President Donald Trump of his viewpoint, including in a recent meeting at the White House, Bloomberg reported.

In fact… The Chinese military does use Nvidia chips, according to research by Georgetown's Center for Security and Emerging Technology, which analyzed 66,000 military purchasing records to come to that conclusion. A large black market has also sprung up to smuggle Nvidia chips into China since the export controls came into place, the New York Times reported last year.

AI in Action

Anthropic's AI assistant, Claude, is transforming the way the company's scientists keep up with the thousands of pages of scientific literature published every day in their field. Instead of reading papers, many Anthropic researchers now simply upload them into Claude and chat with the assistant to distill the main findings. 'I've changed my habits of how I read papers,' Jan Leike, a senior alignment researcher at Anthropic, told TIME earlier this year. 'Where now, usually I just put them into Claude, and ask: can you explain?' To be clear, Leike adds, sometimes Claude gets important stuff wrong. 'But also, if I just skim-read the paper, I'm also gonna get important stuff wrong sometimes,' Leike says. 'I think the bigger effect here is, it allows me to read much more papers than I did before.' That, he says, is having a positive impact on his productivity. 'A lot of time when you're reading papers is just about figuring out whether the paper is relevant to what you're trying to do at all,' he says. 'And that part is so fast [now], you can just focus on the papers that actually matter.'

What We're Reading

Microsoft and OpenAI's AGI Fight Is Bigger Than a Contract — by Steven Levy in Wired

Steven Levy goes deep on the 'AGI' clause in the contract between OpenAI and Microsoft, which could decide the fate of their multi-billion-dollar partnership. It's worth reading to better understand how both sides are thinking about defining AGI. They could do worse than Levy's own description: 'a technology that makes Sauron's Ring of Power look like a dime-store plastic doodad.'

A Blueprint for Redistributing AI's Profits

Time Magazine • Business • July 11, 2025

Welcome back to In the Loop, TIME's new twice-weekly newsletter about the world of AI. We're publishing installments both as stories on TIME's website and as emails. If you're reading this in your browser, you can subscribe to have the next one delivered straight to your inbox.

What to Know

Let's say, sometime in the next few years, artificial intelligence automates most of the jobs that humans currently do. If that happens, how can we avoid societal collapse? This question, once the stuff of science fiction, is now very real. In May, Anthropic CEO Dario Amodei warned that AI could wipe out half of all entry-level white-collar jobs in the next one to five years, and send unemployment rising to up to 20%. In response to such a prediction, you might expect states to begin seriously drawing up contingency plans. Not so. But a growing group of academic economists is working on this question. A new paper, published this Tuesday, suggests a novel way for states to protect their populations in a world of mass AI-enabled job loss.

Sovereign wealth funds — The paper recommends that states invest in AI-related industries via sovereign wealth funds, the same type of financial vehicles that have allowed the likes of Norway and the United Arab Emirates to diversify their oil wealth. This isn't strictly a new idea. In fact, the UAE has already been investing billions of dollars from its sovereign wealth funds into AI. Nvidia, the semiconductor company, has been urging states to invest in 'sovereign AI' for years now. But unlike those examples, which are focused on yielding as big a return on investment as possible, or on exerting geopolitical influence over AI, the paper lays out the social reasons that this might be a good idea.

AI welfare state — The paper argues that if transformative AI is around the corner, the ability of states to economically provide for their citizens may be directly tied to how exposed they are to AI's upside. 'Such investments can be seen as part of the state's responsibility to safeguard public welfare in the face of disruptive technological change,' the paper argues. The returns on investment could be used to fund universal basic income, or a 'stabilization fund' that could allow states to 'absorb shocks, redistribute benefits, and support displaced workers in real time.'

Reasons to be skeptical — To be sure, this approach has risks. States investing billions in AI could paradoxically accelerate the very job-automation trends that they're seeking to mitigate. On the flipside, if AI turns out to be less transformative than expected, piling in at the top of the market could bring losses. And just like retail investing, any potential upside is correlated with how much money you have available in the first place. Rich countries will have the opportunity to become richer; poor countries will struggle to participate at all. 'As [transformative AI]-generated wealth risks deepening global inequality, it may also be necessary to explore new models for transnational benefit-sharing,' the paper notes, 'including internationally coordinated sovereign investment vehicles that allocate a portion of AI-derived returns toward global public goods.'

Who to Know

Person in the news – Elon Musk, owner of xAI

Late Wednesday night, with the launch of Grok 4, the AI race got a new leader: Elon Musk. At least, that's if you believe the benchmarks, which show Grok trouncing competition from OpenAI, Google and Anthropic on some of the industry's most difficult tests. On ARC-AGI-2, a benchmark designed to be easy for humans but difficult for AIs, Grok 4's reasoning mode scored 16.2%—nearly double that of its closest contender (Claude Opus 4 by Anthropic).

Unexpected result — Musk's xAI has not traditionally been seen as a 'frontier' AI company, despite its huge cache of GPU chips. Previous releases of Grok delivered middling performance. And just a day before Grok 4's release, an earlier version of the chatbot had a meltdown on X, repeatedly referring to itself as 'MechaHitler' and sharing violent rape fantasies. (The posts were later deleted.) Episodes like this had encouraged hopes, in at least some corners of the AI world, that the far-right billionaire's attempts to make his bot more 'truth-seeking' were actually making it more stupid.

Musk on Grok and AGI — On a livestream broadcast on X on Wednesday night, Musk said Grok 4 had been trained using 10 times as much computing power as Grok 3. 'With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions,' he said. On the subject of artificial general intelligence, he said: 'Will this be bad or good for humanity? I think it'll be good. Most likely it'll be good. But I've somewhat reconciled myself to the fact that even if it wasn't gonna be good, I'd at least like to be alive to see it happen.'

What Musk does best — If Grok 4's benchmark scores are borne out, it would mean that Musk's core skill — spinning up cracked engineering teams that are blindly focused on a single goal — is applicable to the world of AI. That will worry those in the industry who care about not just developing AI quickly, but also doing so safely. As the MechaHitler debacle showed, neither Musk nor anybody else yet knows how to prevent current AI systems from going badly out of control. 'If you can't prevent your AI from endorsing Hitler,' says expert forecaster Peter Wildeford, 'how can we trust you with ensuring far more complex future AGI can be deployed safely?'

AI in Action

Where is AI? Last year, I wrote about a group of researchers who had attempted to answer that question. Now, the team at Epoch AI has gone one step further: they've built an interactive map of more than 500 AI supercomputers, to track exactly where the world's major AI infrastructure is located. The map confirms what I wrote last year: AI compute is concentrated in rich countries, with the vast majority in the U.S. and China, followed by Europe and the Persian Gulf. As always, if you have an interesting story of AI in Action, we'd love to hear it. Email us at: intheloop@

What We're Reading

They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling — by Kashmir Hill in the New York Times

Large language models are mysterious, shapeshifting artifacts. Their creators train them to adopt helpful personas—but sometimes these personas can slip, revealing a darker side that can lead some vulnerable users down conspiratorial rabbit holes. That tendency was especially stark earlier this year, when OpenAI shipped an update to ChatGPT that inadvertently caused the bot to become more sycophantic—meaning it would egg on almost anything a user said, delusional or not. Kashmir Hill, one of the best reporters in the business, spoke to many users who experienced this behavior, and found some shocking personal stories… including one that turned out to be fatal.

In the Loop: Is AI Making the Next Pandemic More Likely?

Time Magazine • Science • July 1, 2025

Welcome back to In the Loop, TIME's new twice-weekly newsletter about AI. Starting today, we'll be publishing these editions both as stories on TIME's website and as emails. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?

What to Know

If you talk to staff at the top AI labs, you'll hear a lot of stories about how the future could go fantastically well—or terribly badly. And of all the ways that AI might cause harm to the human race, there's one that scientists in the industry are particularly worried about today. That's the possibility of AI helping bad actors to start a new pandemic. 'You could try to synthesize something like COVID or a more dangerous version of the flu—and basically, our modeling suggests that this might be possible,' Anthropic's chief scientist, Jared Kaplan, told me in May.

Measuring the risk — In a new study published this morning, and shared exclusively with TIME ahead of its release, we got the first hard numbers on how experts think the risk of a new pandemic might have increased thanks to AI. The Forecasting Research Institute polled experts earlier this year, asking them how likely a human-caused pandemic might be—and how likely it might become if humans had access to AI that could reliably give advice on how to build a bioweapon.

What they found — Experts, who were polled between December and February, put the risk of a human-caused pandemic at 0.3% per year. But, they said, that risk would jump fivefold, to 1.5% per year, if AI were able to provide human-level virology advice.

You can guess where this is going — Then, in April, the researchers tested today's AI tools on a new virology troubleshooting benchmark. They found that today's AI tools outperform PhD-level virologists at complex troubleshooting tasks in the lab. In other words, AI can now do the very thing that forecasters warned would increase the risk of a human-caused pandemic fivefold. We just published the full story—you can read it here.

Who to Know

Person in the news – Matthew Prince, CEO of Cloudflare

Since its founding in 2009, Cloudflare has been protecting sites on the internet from being knocked offline by large influxes of traffic, or indeed coordinated attacks. Now, some 20% of the internet is covered by its network. And today, Cloudflare announced that this network would begin to block AI crawlers by default—essentially putting a fifth of the internet behind a paywall for the bots that harvest info to train AIs like ChatGPT and Claude.

Step back — Today's AI is so powerful because it has essentially inhaled the whole of the internet—from my articles to your profile photos. By running neural networks over that data using immense quantities of computing power, AI companies have taught these systems the texture of the world at such an enormous scale that it has given rise to new AI capabilities, like the ability to answer questions on almost any topic, or to generate photorealistic images. But this scraping has sparked a huge backlash from publishers, artists and writers, who complain that it has been done without any consent or compensation.

A new model — Cloudflare says the move will 'fundamentally change how AI companies access web content going forward.' Major publishers, including TIME, have expressed their support for the shift toward an 'opt-in' rather than an 'opt-out' system, the company says. Cloudflare also says it is working on a new initiative, called Pay Per Crawl, in which creators will have the option of setting a price on their data in return for making it available to train AI.

Fighting words — Prince was not available for an interview this week. But at a recent conference, he disclosed that traffic to news sites had dropped precipitously across the board thanks to AI, in a shift that many worry will imperil the existence of the free press. 'I go to war every single day with the Chinese government, the Russian government, the Iranians, the North Koreans, probably Americans, the Israelis—all of them who are trying to hack into our customer sites,' Prince said. 'And you're telling me I can't stop some nerd with a C-corporation in Palo Alto?'

AI in Action

Some 61% of U.S. adults have used AI in the last six months, and 19% interact with it daily, according to a new survey of AI adoption by the venture capital firm Menlo Ventures. But just 3% of those users pay for access to the software, Menlo estimated based on the survey's results—meaning the remaining 97% use only the free tiers of AI tools. AI usage figures are higher for Americans in the workforce than for other groups: some 75% of employed adults have used AI in the last six months, including 26% who report using it daily, according to the survey. Students also report high AI usage: 85% have used it in the last six months, and 22% say they use it daily. The statistics seem to suggest that some students and workers are growing dependent on free AI tools—a usage pattern that might become lucrative if AI companies were to begin restricting access or raising prices. However, the proliferation of open-source AI models has created intense price competition that may limit any single company's ability to raise prices dramatically. As always, if you have an interesting story of AI in Action, we'd love to hear it. Email us at: intheloop@

What we're reading

'The Dead Have Never Been This Talkative': The Rise of AI Resurrection — by Tharin Pillay in TIME

With the rise of image-to-video tools like the newest version of Midjourney, the world recently crossed a threshold: it's now possible, in just a few clicks, to reanimate a photo of your dead relative. You can train a chatbot on snippets of their writing to replicate their patterns of speech; if you have a long enough clip of them speaking, you can also replicate their voice. Will these tools make it easier to process the heart-rending pain of bereavement? Or might their allure in fact make it harder to move forward? My colleague Tharin published a deeply insightful piece last week about the rise of this new technology. It's certainly a weird time to be alive. Or, indeed, to be dead.

‘Succession’ creator Jesse Armstrong’s ‘Mountainhead’ is a too-literal-minded satire

Boston Globe • Entertainment • May 27, 2025

Ven (Cory Michael Smith, who brought perfect smarm to the young Chevy Chase in 'Saturday Night') and his fellow moguls all have a direct line to the White House, and they all speak in the clipped, speedy patter of the Roy family — and of the characters from 'In the Loop,' Armando Iannucci's superb 2009 satire about bumbling British and American government operatives that Armstrong co-wrote, receiving an Oscar nomination for best adapted screenplay. Armstrong has a gift for puncturing balloons of power and hoisting the self-important with their own petards. He's also a wizard with one-liners. Walking into Souper's soulless, sprawling Mountainhead, Jeff asks: 'Was your interior decorator Ayn Bland?' It's a great zinger that holds out hope for light touches that never really arrive.

(l to r) Steve Carell and Ramy Youssef. Macall Polay/HBO

That's a problem, because this material could really use an infusion of levity to make it go down without choking. Make no mistake, 'Mountainhead' is a comedy, with fangs. But too often it also feels like a literal-minded screed or lecture, not unlike the 2021 Netflix satire 'Don't Look Up,' in which Leonardo DiCaprio and Jennifer Lawrence's astronomers struggle to interest the powers that be in the fact that a comet is on an apocalyptic collision course with Earth. See the short-sighted technocrats, whistling past the global graveyard. Except this time they represent the Musks, Zuckerbergs and Altmans of the world, the new power brokers convinced that their obscene net worths somehow equate to forward thinking that's good for the rest of the world. As Ven and his friends/sycophants justify the destruction and chaos they create, it's impossible to miss the echoes of the current broligarchy's crowing. 'Mountainhead' is nothing if not au courant.

The second half of the movie takes a plot turn that, unfortunately for our purposes, falls firmly in the spoiler zone. It does give 'Mountainhead' a jolt of focus and energy, and brings some comic clarity to dark questions that linger over the whole affair: What is the human collateral damage of zero-sum tech bro thinking? Are we all mere negotiating chips in some bizarre big picture we can't quite grasp? The second act of 'Mountainhead' feels more concrete and human than the first. It also feels like it belongs to a different movie.

'Succession' aficionados might find themselves flashing back to some of that series' best and most 'Mountainhead'-relevant episodes. There's Season 2's 'Hunting,' in which the Roy family and associates head to a Hungarian mansion for a corporate retreat that becomes a ritual of humiliation (this is often referred to as the 'Boar on the Floor' episode). And Season 4's 'America Decides,' which finds the Roy-run, Fox News-styled network ATN leaning on the levers of power to determine the winner of a U.S. presidential election. These stories, too, express their share of incredulous outrage. But they also move with a dancer's nimble grace. 'Mountainhead' is more like a heavyweight boxer, slugging away. It is satire as blunt-force object.

★★ MOUNTAINHEAD

Directed and written by Jesse Armstrong. Starring Steve Carell, Cory Michael Smith, Jason Schwartzman, Ramy Youssef, Hadley Robinson, and Amie MacKenzie. On HBO and Max starting May 31. 108 min. TV-MA (language, mild violence, adult content, wealthy people behaving badly).
