Why the AI Future Is Unfolding Faster Than Anyone Expected

Bloomberg

AI is improving more quickly than we realize. The economic and societal impact could be massive.
By Brad Stone
May 20, 2025 at 8:30 AM EDT
When OpenAI introduced ChatGPT in 2022, people could instantly see that the field of artificial intelligence had dramatically advanced. We all speak a language, after all, and could appreciate how the chatbot answered questions in a fluid, close-to-human style. AI has made immense strides since then, but many of us are—and let me put this delicately—too unsophisticated to notice.
Max Tegmark, a professor of physics at the Massachusetts Institute of Technology, says our limited ability to gather specialized knowledge makes it much harder for us to recognize the disconcerting pace of improvements in technology. Most people aren't high-level mathematicians and may not know that, just in the past few years, AI's mastery has progressed from high-school-level algebra to ninja-level calculus. Similarly, there are relatively few musical virtuosos in the world, but AI has recently become adept at reading sheet music, understanding music theory, even creating new music in major genres. 'What a lot of people are underestimating is just how much has happened in a very short amount of time,' Tegmark says. 'Things are going very fast now.'
In San Francisco, still for now the center of the AI action, one can track these advances in the waves of new computer learning methods, chatbot features and podcast-propagated buzzwords. In February, OpenAI unveiled a tool called Deep Research that functions like a resourceful colleague, responding to in-depth queries by digging up facts on the web, synthesizing information and generating chart-filled reports. In another major development, both OpenAI and Anthropic—the latter co-founded by Chief Executive Officer Dario Amodei and a breakaway group of former OpenAI engineers—developed tools that let users control whether a chatbot engages in 'reasoning': They can direct it to deliberate over a query for an extended period to arrive at more accurate or thorough answers.
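For readers who want to see what that control actually looks like, here is a minimal sketch using Anthropic's Python SDK, whose extended-thinking option lets a developer grant the model a budget of deliberation tokens before it answers. The model name and token budgets below are illustrative assumptions, not recommendations; OpenAI exposes a comparable dial through its reasoning-effort settings.

```python
# Minimal sketch: asking the same question with and without extended
# "reasoning," via Anthropic's Python SDK. Model id and budgets are
# illustrative; check the current docs for supported values.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

question = "Is 2^61 - 1 prime? Explain briefly."

# Fast answer: no extended thinking.
quick = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed model id
    max_tokens=1024,
    messages=[{"role": "user", "content": question}],
)
print("quick:", quick.content[0].text)

# Deliberate answer: give the model a budget of "thinking" tokens to
# work through the problem before it writes its reply.
deliberate = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": question}],
)

# The deliberate response interleaves "thinking" blocks with final text.
for block in deliberate.content:
    if block.type == "text":
        print("deliberate:", block.text)
```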
Another fashionable trend is called agentic AI—autonomous programs that can (theoretically) perform tasks for a user without supervision, such as sending emails or booking restaurant reservations. Techies are also buzzing about 'vibe coding'—not a new West Coast meditation practice but the art of describing general ideas and letting popular coding assistants like Microsoft Corp.'s GitHub Copilot or Cursor, made by the startup Anysphere Inc., take it from there.
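Stripped of the buzz, 'agentic' mostly names a simple control loop: a model proposes an action, the surrounding program executes it with a real tool, and the result is fed back until the model decides the job is done. The toy sketch below is self-contained Python; the two tools and the hard-coded pick_next_action function are hypothetical stand-ins for an actual LLM call, not any vendor's product.

```python
# Toy agent loop. In a real system, pick_next_action would be an LLM call
# that chooses a tool from its description; here it is hard-coded so the
# example runs on its own.
from dataclasses import dataclass

@dataclass
class Action:
    tool: str       # which tool the "model" wants to use
    argument: str   # free-form input for that tool

def book_table(arg: str) -> str:
    return f"reservation confirmed: {arg}"

def send_email(arg: str) -> str:
    return f"email sent: {arg}"

TOOLS = {"book_table": book_table, "send_email": send_email}

def pick_next_action(goal: str, history: list[str]) -> Action | None:
    # Stand-in for the model. Returns None when it judges the goal met.
    if not history:
        return Action("book_table", f"dinner for two ({goal})")
    if len(history) == 1:
        return Action("send_email", f"invitation for {goal}")
    return None

def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    while (action := pick_next_action(goal, history)) is not None:
        history.append(TOOLS[action.tool](action.argument))
    return history

print(run_agent("anniversary dinner"))
```

Everything hard about production agents, such as choosing tools reliably, recovering from errors and knowing when to stop, lives inside that one stand-in function, which is why 'theoretically' still does a lot of work in the definition above.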
As developers blissfully vibe code, there's also been an unmistakable vibe shift in Silicon Valley. Just a year ago, breakthroughs in AI were usually accompanied by furrowed brows and hand-wringing, as tech and political leaders fretted about the safety implications. That changed sometime around February, when US Vice President JD Vance, speaking at a global summit in Paris focused on mitigating harms from AI, inveighed against any regulation that might impede progress. 'I'm not here this morning to talk about AI safety,' he said. 'I'm here to talk about AI opportunity.'
When Vance and President Donald Trump took office, they dashed any hope of new government rules that might slow the AI juggernauts. On his third day in office, Trump rescinded an executive order from his predecessor, Joe Biden, that set AI safety standards and asked tech companies to submit safety reports for new products. At the same time, AI startups have softened their calls for regulation. In 2023, OpenAI CEO Sam Altman told Congress that the possibility AI could run amok and hurt humans was among his 'areas of greatest concern' and that companies should have to get licenses from the government to operate new models. At the TED Conference in Vancouver this April, he said he no longer favored that approach, because he'd 'learned more about how the government works.'
It's not unusual in Silicon Valley to see tech companies and their leaders contort their ideologies to fit the shifting political winds. Still, the intensity over the past few months has been startling to watch. Many tech companies have stopped highlighting existential AI safety concerns, shed employees focused on the issue (along with diversity, sustainability and other Biden-era priorities) and become less apologetic about doing business with militaries at home and abroad, brushing aside staff concerns about placing deadly weapons in the hands of AI. Rob Reich, a professor of political science and senior fellow at the Institute for Human-Centered AI at Stanford University, says 'there's a shift to explicitly talking about American advantage. AI security and sovereignty are the watchwords of the day, and the geopolitical implications of building powerful AI systems are stronger than ever.'
If Trump's policies are one reason for the change, another is the emergence of DeepSeek and its talented, enigmatic CEO, Liang Wenfeng. When the Chinese AI startup released its R1 model in the US in January, analysts marveled at the quality of a product from a company that had raised far less capital than its US rivals and was supposedly using data centers with less powerful Nvidia Corp. chips. DeepSeek's chatbot shot to the top of the charts on app stores, and US tech stocks promptly cratered on the possibility that the upstart had figured out a more efficient way to reap AI's gains.
The uproar has quieted since then, but Trump has further restricted the sale of powerful American AI chips to China, and Silicon Valley now watches DeepSeek and its Chinese peers with a sense of urgency. 'Everyone has to think very carefully about what is at stake if we cede leadership,' says Alex Kotran, CEO of the AI Education Project.
Losing to China isn't the only potential downside, though. AI-generated content is becoming so pervasive online that it could soon sap the web of any practical utility, and the Pentagon is using machine learning to hasten humanity's possible contact with alien life. Let's hope they like us. Nor has this geopolitical footrace calmed the widespread fear of economic damage and job losses. Take just one field: computer programming. Sundar Pichai, CEO of Alphabet Inc., said on an earnings call in April that AI now generates 'well over 30%' of all new code for the company's products. Garry Tan, CEO of the startup accelerator Y Combinator, said on a podcast that for a quarter of the startups in his winter program, 95% of their lines of code were AI-generated.
MIT's Tegmark, who's also president of an AI safety advocacy organization called the Future of Life Institute, finds solace in his belief that a human instinct for self-preservation will ultimately kick in: Pro-AI business leaders and politicians 'don't want someone to build an AI that will overthrow the government any more than they want plutonium to be legalized.' He remains concerned, though, that the inexorable acceleration of AI development is occurring just outside the visible spectrum of most people on Earth, and that it could have economic and societal consequences beyond our current imagination. 'It sounds like sci-fi,' Tegmark says, 'but I remind you that ChatGPT also sounded like sci-fi as recently as a few years ago.'
More from the AI Issue
DeepSeek's 'Tech Madman' Founder Is Threatening US Dominance in AI Race
The company's sudden emergence illustrates how China's industry is thriving despite Washington's efforts to slow it down.
Microsoft's CEO on How AI Will Remake Every Company, Including His
Nervous customers and a volatile partnership with OpenAI are complicating things for Satya Nadella and the world's most valuable company.
America's Leading Alien Hunters Depend on AI to Speed Their Search
Harvard's Galileo Project has brought high-end academic research to a once-fringe pursuit, and the Pentagon is watching.
How AI Has Already Changed My Job
Workers from different industries talk about the ways they're adapting.
Maybe AI Slop Is Killing the Internet, After All
The assertion that bots are choking off human life online has never seemed more true.
Anthropic Is Trying to Win the AI Race Without Losing Its Soul
Dario Amodei has transformed himself from an academic into the CEO of a $61 billion startup.
Why Apple Still Hasn't Cracked AI
Insiders say continued failure to get artificial intelligence right threatens everything from the iPhone's dominance to plans for robots and other futuristic products.
10 People to Watch in Tech: From AI Startups to Venture Capital
A guide to the people you'll be hearing more about in the near future.

Related Articles

An Appeal to My Alma Mater

Yahoo · 33 minutes ago

When Maggie Li Zhang enrolled in a college class where students were told to take notes and read on paper rather than on a screen, she felt anxious and alienated. Zhang and her peers had spent part of high school distance learning during the pandemic. During her first year at Pomona College, in Southern California, she had felt most engaged in a philosophy course where the professor treated a shared Google Doc as the focus of every class, transcribing discussions in real time on-screen and enabling students to post comments. So the 'tech-free' class that she took the following semester disoriented her. 'When someone writes something you think: Should I be taking notes too?' she told me in an email. But gradually, she realized that exercising her own judgments about what to write down, and annotating course readings with ink, helped her think more deeply and connect with the most difficult material. 'I like to get my finger oil on the pages,' she told me. Only then does a text 'become ripe enough for me to enter.' Now, she said, she feels 'far more alienated' in classes that allow screens.

Zhang, who will be a senior in the fall, is among a growing cohort of students at Pomona College who are trying to alter how technology affects campus life. I attended Pomona from 1998 to 2002; I wanted to learn more about these efforts and the students' outlook on technology, so I recently emailed or spoke with 10 of them. One student wrote an op-ed in the student newspaper calling for more classes where electronic devices are banned. Another co-founded a 'Luddite Club' that holds a weekly tech-free hangout. Another now carries a flip phone rather than a smartphone on campus. Some Pomona professors with similar concerns are limiting or banning electronic devices in their classes and trying to curtail student use of ChatGPT. It all adds up to more concern over technology than I have ever seen at the college.

These Pomona students and professors are hardly unique in reacting to a new reality. A generation ago, the prevailing assumption among college-bound teenagers was that their undergraduate education would only benefit from cutting-edge technology. Campus tour guides touted high-speed internet in every dorm as a selling point. Now that cheap laptops, smartphones, Wi-Fi, and ChatGPT are all ubiquitous—and now that more people have come to see technology as detrimental to students' academic and social life—countermeasures are emerging on various campuses. The Wall Street Journal reported last month that sales of old-fashioned blue books for written exams had increased over the past year by more than 30 percent at Texas A&M University and nearly 50 percent at the University of Florida, while rising 80 percent at UC Berkeley over the past two years. And professors at schools such as the University of Virginia and the University of Maryland are banning laptops in class.

The pervasiveness of technology on campuses poses a distinct threat to small residential liberal-arts colleges. Pomona, like its closest peer institutions, spends lots of time, money, and effort to house nearly 95 percent of its 1,600 students on campus, feed them in dining halls, and teach them in tiny groups, with a student-to-faculty ratio of 8 to 1.

That costly model is worth it, boosters insist, because young people are best educated in a closely knit community where everyone learns from one another in and outside the classroom. Such a model ceases to work if many of the people physically present in common spaces absent their minds to cyberspace (a topic that the psychologist Jonathan Haidt has explored in the high-school context).

At the same time, Pomona is better suited than most institutions to scale back technology's place in campus life. With a $3 billion endowment, a small campus, and lots of administrators paid to shape campus culture, it has ample resources and a natural setting to formalize experiments as varied as, say, nudging students during orientation to get flip phones, forging a tech-free culture at one of its dining halls, creating tech-free dorms akin to its substance-free options––something that tiny St. John's College in Maryland is attempting––and publicizing and studying the tech-free classes of faculty members who choose that approach. Doing so would differentiate Pomona from competitors. Aside from outliers such as Deep Springs College and some small religious institutions—Wyoming Catholic College has banned phones since 2007, and Franciscan University of Steubenville in Ohio launched a scholarship for students who give up smartphones until they earn their degree—vanishingly few colleges have committed to thoughtful limits on technology.

My hope is that Pomona or another liberal-arts college recasts itself from a place that brags about how much tech its incoming students will be able to access––'there are over 160 technology enhanced learning spaces at Pomona,' the school website states––to a place that also brags about spaces that it has created as tech refuges. 'In a time of fierce competition for students, this might be something for a daring and visionary college president to propose,' Susan McWilliams Barndt, a Pomona politics professor, told me. McWilliams has never allowed laptops or other devices in her classes; she has also won Pomona's most prestigious teaching prize every time she's been eligible. 'There may not be a million college-bound teens across this country who want to attend such a school,' she said, 'but I bet there are enough to sustain a vibrant campus or two.'

So far, Pomona's leadership has not aligned itself with the professors and students who see the status quo as worse than what came before it. 'I have done a little asking around today and I was not able to find any initiative around limiting technology,' the college's new chief communications officer, Katharine Laidlaw, wrote to me. 'But let's keep in touch. I could absolutely see how this could become a values-based experiment at Pomona.'

Pomona would face a number of obstacles in trying to make itself less tech-dependent. The Americans With Disabilities Act requires allowing eligible students to use tools such as note-taking software, closed captioning, and other apps that live on devices. But Oona Eisenstadt, a religious-studies professor at Pomona who has taught tech-free classes for 21 years, told me that, although she is eager to follow the law (and even go beyond it) to accommodate her students, students who require devices in class are rare. If a student really needed a laptop to take notes, she added, she would consider banning the entire class from taking notes, rather than allowing the computer. 'That would feel tough at the beginning,' she said, but it 'might force us into even more presence.'

Ensuring access to course materials is another concern. Amanda Hollis-Brusky, a professor of politics and law, told me that she is thinking of returning to in-class exams because of 'a distinct change' in the essays her students submit. 'It depressed me to see how often students went first to AI just to see what it spit out, and how so much of its logic and claims still made their way into their essays,' she said. She wants to ban laptops in class too––but her students use digital course materials, which she provides to spare them from spending money on pricey physical texts. 'I don't know how to balance equity and access with the benefits of a tech-free classroom,' she lamented. Subsidies for professors struggling with that trade-off are the sort of experiment the college could fund.

Students will, of course, need to be conversant in recent technological advances to excel in many fields, and some courses will always require tech in the classroom. But just as my generation has made good use of technology, including the iPhone and ChatGPT, without having been exposed to it in college, today's students, if taught to think critically for four years, can surely teach themselves how to use chatbots and more on their own time. In fact, I expect that in the very near future, if not this coming fall, most students will arrive at Pomona already adept at using AI; they will benefit even more from the college teaching them how to think deeply without it.

Perhaps the biggest challenge of all is that so many students who don't need tech in a given course want to use it. 'In any given class I can look around and see LinkedIn pages, emails, chess games,' Kaitlyn Ulalisa, a sophomore who grew up near Milwaukee, wrote to me. In high school, Ulalisa herself used to spend hours every day scrolling on Instagram, Snapchat, and TikTok. Without them, she felt that she 'had no idea what was going on' with her peers. At Pomona, a place small enough to walk around campus and see what's going on, she deleted the apps from her phone again. Inspired by a New York Times article about a Luddite Club started by a group of teens in Brooklyn, she and a friend created a campus chapter. They meet every Friday to socialize without technology. Still, she said, for many college students, going off TikTok and Instagram seems like social death, because their main source of social capital is online.

Accounts like hers suggest that students might benefit from being forced off of their devices, at least in particular campus spaces. But Michael Steinberger, a Pomona economics professor, told me he worries that an overly heavy-handed approach might deprive students of the chance to learn for themselves. 'What I hope that we can teach our students is why they should choose not to open their phone in the dining hall,' he said. 'Why they might choose to forgo technology and write notes by hand. Why they should practice cutting off technology and lean in to in-person networking to support their own mental health, and why they should practice the discipline of choosing this for themselves. If we limit the tech, but don't teach the why, then we don't prepare our students as robustly as we might.'

Philosophically, I usually prefer the sort of hands-off approach that Steinberger is advocating. But I wonder if, having never experienced what it's like to, say, break bread in a dining hall where no one is looking at a device, students possess enough data to make informed decisions. Perhaps heavy-handed limits on tech, at least early in college, would leave them better informed about trade-offs and better equipped to make their own choices in the future.

What else would it mean for a college-wide experiment in limited tech to succeed? Administrators would ideally measure academic outcomes, effects on social life, even the standing of the college and its ability to attract excellent students. Improvements along all metrics would be ideal. But failures needn't mean wasted effort if the college publicly shares what works and what doesn't. A successful college-wide initiative should also take care to avoid undermining the academic freedom of professors, who must retain all the flexibility they currently enjoy to make their own decisions about how to teach their classes. Some will no doubt continue with tech-heavy teaching methods. Others will keep trying alternatives.

Elijah Quetin, a visiting instructor in physics and astronomy at Pomona, told me about a creative low-tech experiment that he already has planned. Over the summer, Quetin and six students (three of them from the Luddite Club) will spend a few weeks on a ranch near the American River; during the day, they will perform physical labor—repairing fencing, laying irrigation pipes, tending to sheep and goats—and in the evening, they'll undertake an advanced course in applied mathematics inside a barn. 'We're trying to see if we can do a whole-semester course in just two weeks with no infrastructure,' he said. He called the trip 'an answer to a growing demand I'm hearing directly from students' to spend more time in the real world. It is also, he said, part of a larger challenge to 'the mass-production model of higher ed,' managed by digital tools 'instead of human labor and care.'

Even in a best-case scenario, where administrators and professors discover new ways to offer students a better education, Pomona is just one tiny college. It could easily succeed as academia writ large keeps struggling. 'My fear,' Gary Smith, an economics professor, wrote to me, 'is that education will become even more skewed with some students at elite schools with small classes learning critical thinking and communication skills, while most students at schools with large classes will cheat themselves by using LLMs'—large language models—'to cheat their way through school.' But successful experiments at prominent liberal-arts colleges are better, for everyone, than nothing. While I, too, would lament a growing gap among college graduates, I fear a worse outcome: that all colleges will fail to teach critical thinking and communication as well as they once did, and that a decline in those skills will degrade society as a whole. If any school provides proof of concept for a better way, it might scale. Peer institutions might follow; the rest of academia might slowly adopt better practices. Some early beneficiaries of the better approach would meanwhile fulfill the charge long etched in Pomona's concrete gates: to bear their added riches in trust for mankind.

Article originally published at The Atlantic

We need guardrails for artificial superintelligence NOW — before it's too late

New York Post · 2 hours ago

America's 'AI race with China' is a headline we see more and more. But we're actually in two high-stakes races with China in artificial intelligence. First: a competition for commercial dominance that is reshaping economies, military power and global influence. The second race, though less visible, has the potential to be even more existential: a sprint toward artificial superintelligence.

What's ASI? Unlike current AI models trained to perform relatively narrow tasks, ASI refers to a hypothetical future version of AI that exceeds human intelligence across every domain — creative, strategic, even emotional. It could be capable of autonomously improving itself, outpacing our ability to control or predict it. This technology doesn't yet exist, but leading experts, industry leaders and lawmakers believe its emergence could be possible within the next decade. That's the problem: It may not feel urgent — until it's too late. Which is why the time to act is now.

President Donald Trump and his team are in a unique position to secure America's preeminence on both fronts by winning the commercialization race and negotiating what may be the most consequential diplomatic deal since the nuclear-arms treaties of the Cold War.

China's advancements in commercial AI have dramatically closed America's lead on the rest of the world. How? Beijing bought, stole and downloaded US technology, leading to breakthroughs that resemble a modern-day Sputnik moment. Chinese firms are unveiling AI models that are both cheaper and more sophisticated than we knew possible (remember our reaction to DeepSeek?). China's state-directed pursuit extends far beyond economic ambitions. The Chinese Communist Party openly seeks a technological dominance that's anchored in its own core principles: surveillance, censorship, and control. A Chinese-led AI era risks embedding these authoritarian pillars into the digital fabric of global civilization and everyday life.

An unregulated race toward ASI presents an even deeper danger. Influential forecasts — notably AI 2027, a predictive framework developed by key experts — warn that the emergence of ASI could pose unprecedented risks to humanity. Yes, these risks are still theoretical, but they're also not so far-fetched. In the hands of an adversary, an ASI system has the potential to destroy global electrical grids, develop incurable super-viruses or empty every bank account in the world. That may sound like the latest plot in Tom Cruise's 'Mission Impossible,' but it's also the plausible consequence of unchecked superintelligence in the wrong hands.

Most top AI leaders believe ASI could materialize within this decade and pose unprecedented risk. Ilya Sutskever, co-founder and chief scientist of OpenAI, told his researchers: 'We're definitely going to build a bunker before we release [artificial general intelligence].' So, if bunkers are the recommended precaution for AGI, what should we prepare for ASI?

Vice President JD Vance appears to be grappling with these risks, as he reportedly explores the possibility of a Vatican-brokered diplomatic slowdown of the ASI race between the United States and China. Pope Leo XIV symbolizes precisely the kind of neutral, morally credible mediator capable of convening such crucial talks — and if the Cold War could produce nuclear-arms treaties, then surely today's AI arms race demands at least an attempt at serious discussion.

Skeptics naturally and reasonably question why China would entertain such negotiations, but Beijing has subtly acknowledged these undeniable dangers as well. Some analysts claim Xi Jinping himself is an 'AI doomer' who understands the extraordinary risk.

Trump is uniquely positioned to lead here. He can draw a clear line: America will outcompete China in commercial AI, no apologies. But when it comes to ASI, the stakes are too high for brinkmanship. We need enforceable rules, verification mechanisms, diplomatic pressure and, yes, moral clarity — before this issue gets ahead of us.

During the Cold War, Presidents Dwight Eisenhower, John Kennedy and Ronald Reagan all knew that competing militarily didn't mean refusing to negotiate guardrails. Reagan's mantra — 'trust but verify' — is just as relevant for ASI as it was for nuclear arms.

This is President Trump's opportunity. He can drive the AI economy forward, infusing American founding principles into global AI adoption, while leading a parallel effort to prevent catastrophe. Done right, this would be the most consequential diplomatic initiative since the Strategic Arms Reduction Treaty. And it wouldn't come at the cost of American strength; it would cement it.

We've reached a crossroads. The commercialization of AI can secure America's future, but the weaponization of superintelligence could end it.

Chris Stewart was a member of Congress from Utah from 2013 to 2023. Mark Beall is president of the AI Policy Network.

OpenAI CEO Sam Altman says AI could replace interns — but there's still hope for Gen Z

Tom's Guide · 2 hours ago

Entry-level jobs as we know them could soon be a thing of the past. OpenAI CEO Sam Altman says AI can now effectively do the same work as junior-level employees, and its skillset is only expected to get even better in the coming months. He predicted that AI will eventually rival the skills of even an experienced engineer, all while being uniquely capable of operating continuously for days on end without breaks.

'Today [AI] is like an intern that can work for a couple of hours but at some point it'll be like an experienced software engineer that can work for a couple of days,' Altman told a panel this week alongside Snowflake CEO Sridhar Ramaswamy at Snowflake Summit 2025.

Altman added that in the next year, we could see AI solving complex business problems autonomously. 'I would bet next year that in some limited cases, at least in some small ways, we start to see agents that can help us discover new knowledge, or can figure out solutions to business problems that are very non-trivial,' he said.

It's a bold prediction we've heard echoed by other tech CEOs like Nvidia's Jensen Huang, who warned that those who hesitate to embrace AI may find themselves at the unemployment office. 'You're not going to lose your job to an AI, but you're going to lose your job to someone who uses AI,' he said at last month's Milken Institute conference.

Generative AI stands poised to make entry-level jobs obsolete at a time when Generation Z is solidifying its place in the workforce, but that hasn't stopped Gen Z from embracing the technology. A recent Resume survey found that while one in 10 workers reported using ChatGPT regularly, Gen Z workers were twice as likely to use the tool. The same study found that the vast majority of workers at any age see ChatGPT as a helpful tool. But over half of Gen Z workers considered it the equivalent of another co-worker or assistant, compared to 40% of millennials and 35% of older generations.

Altman has broken down the generational differences in AI usage before: '[It's a] gross oversimplification, but like older people use ChatGPT as a Google replacement. Maybe people in their twenties and thirties use it as like a life advisor, and then, like people in college use it as an operating system,' he said at Sequoia Capital's AI Ascent event in May.

Even as Gen Z embraces AI, some tech leaders have been sounding the alarm bells about the economic fallout of an AI-driven job market. Anthropic CEO Dario Amodei recently told Axios that AI could wipe out half of all entry-level white-collar jobs, causing unemployment to skyrocket to 10% to 20%.
