The age of incredibly powerful 'manager nerds' is upon us, Anthropic cofounder says
Managers need "soft skills" like communication alongside hard technical skills. But what if the job becomes more about managing AI agents than people?
Anthropic cofounder Jack Clark says AI agents are ushering in an era of the "nerd-turned-manager."
"I think it's actually going to be the era of the manager nerds now, where I think being able to manage fleets of AI agents and orchestrate them is going to make people incredibly powerful," he said on an episode of the "Conversations with Tyler" podcast last week.
"We're going to see this rise of the nerd-turned-manager who has their people, but their people are actually instances of AI agents doing large amounts of work for them," he added.
Clark said he's already seeing this play out with some startups that have "very small numbers of employees relative to what they used to have because they have lots of coding agents working for them."
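What that fleet management might look like in practice is simple to sketch. The snippet below fans several coding tasks out to a model in parallel and gathers the results for human review; it assumes Anthropic's Python SDK, and the model name and task list are illustrative placeholders rather than details from anyone quoted here.

```python
# A minimal sketch of "managing a fleet" of coding agents: fan tasks out
# to a model in parallel, then collect the results for human review.
# Assumes the Anthropic Python SDK; the model name and task list are
# illustrative placeholders, not details from the article.
import asyncio

import anthropic

client = anthropic.AsyncAnthropic()  # reads ANTHROPIC_API_KEY from the environment

TASKS = [
    "Write a Python function that validates email addresses.",
    "Write unit tests for a slugify() helper.",
    "Draft a docstring for a retry decorator.",
]

async def delegate(task: str) -> str:
    """Hand one task to the model and return its reply."""
    message = await client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; substitute any current model
        max_tokens=1024,
        messages=[{"role": "user", "content": task}],
    )
    return message.content[0].text

async def main() -> None:
    # The "manager" loop: delegate everything at once, then review each result.
    results = await asyncio.gather(*(delegate(t) for t in TASKS))
    for task, result in zip(TASKS, results):
        print(f"--- {task}\n{result[:200]}\n")

if __name__ == "__main__":
    asyncio.run(main())
```

The human's job in that loop is the one Clark describes: choosing what to delegate and judging what comes back.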
He's not the only tech exec to predict AI agents will let teams do more with fewer people.
Meta CEO Mark Zuckerberg said at the Stripe Sessions conference last week that tapping into AI can help entrepreneurs "focus on the core idea" of their business and operate with "very small, talent-dense teams."
"If you were starting whatever you're starting 20 years ago, you would have had to have built up all these different competencies inside your company, and now there are just great platforms to do it," Zuckerberg said.
Y Combinator CEO Garry Tan said in March that he thinks "vibe coding" — using generative AI tools to rapidly build and experiment with software — will help small startup teams do the work of 50 to 100 engineers.
"People are getting to a million dollars to 10 million dollars a year revenue with under 10 people, and that's really never happened before in early stage venture," Tan said. "You can just talk to the large language models and they will code entire apps."
AI researchers and other experts have warned of the risks of over-relying on the technology, especially as a replacement for human labor: LLMs hallucinate, and vibe coding can in some instances make code harder to scale and debug.
Mike Krieger, the cofounder of Instagram and chief product officer at Anthropic, said on a podcast earlier this year that he expects software developers' jobs to shift over the next three years toward double-checking AI-generated code rather than writing it themselves.
"How do we evolve from being mostly code writers to mostly delegators to the models and code reviewers?" he said on the " 20VC" podcast.
The job will be about "coming up with the right ideas, doing the right user interaction design, figuring out how to delegate work correctly, and then figuring out how to review things at scale," he added.
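That "delegate, then review" workflow is easy to picture in code. Here is a hedged sketch of the review half, again assuming Anthropic's Python SDK; the diff and the prompt are invented for illustration.

```python
# A sketch of the "review at scale" half of that loop: ask a model to
# flag problems in a diff before a human signs off. The diff and prompt
# are invented for illustration; assumes the Anthropic Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

diff = """\
--- a/billing.py
+++ b/billing.py
-    total = sum(item.price for item in items)
+    total = sum(item.price * item.qty for item in items)
"""

review = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": f"Review this diff for bugs and risky changes:\n\n{diff}",
    }],
)
print(review.content[0].text)  # the human reviewer still makes the final call
```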
A spokesperson for Anthropic previously told BI the company sees itself as a "testbed" for workplaces navigating AI-driven changes to critical roles.
"At Anthropic, we're focused on developing powerful and responsible AI that works with people, not in place of them," the spokesperson said. "As Claude rapidly advances in its coding capabilities for real-world tasks, we're observing developers gradually shifting toward higher-level responsibilities."
Years ago, when I started writing about Silicon Valley's efforts to replace workers with artificial intelligence, most tech executives at least had the decency to lie about it. 'We're not automating workers, we're augmenting them,' the executives would tell me. 'Our A.I. tools won't destroy jobs. They'll be helpful assistants that will free workers from mundane drudgery.' Of course, lines like those — which were often intended to reassure nervous workers and give cover to corporate automation plans — said more about the limitations of the technology than the motives of the executives. Back then, A.I. simply wasn't good enough to automate most jobs, and it certainly wasn't capable of replacing college-educated workers in white-collar industries like tech, consulting and finance. That is starting to change. Some of today's A.I. systems can write software, produce detailed research reports and solve complex math and science problems. Newer A.I. 'agents' are capable of carrying out long sequences of tasks and checking their own work, the way a human would. And while these systems still fall short of humans in many areas, some experts are worried that a recent uptick in unemployment for college graduates is a sign that companies are already using A.I. as a substitute for some entry-level workers. On Thursday, I got a glimpse of a post-labor future at an event held in San Francisco by Mechanize, a new A.I. start-up that has an audacious goal of automating all jobs — yours, mine, those of our doctors and lawyers, the people who write our software and design our buildings and care for our children. 'Our goal is to fully automate work,' said Tamay Besiroglu, 29, one of Mechanize's founders. 'We want to get to a fully automated economy, and make that happen as fast as possible.' Want all of The Times? Subscribe.