
How To Prepare Teams For Their New AI Agent Coworkers
At the end of February, Salesforce CEO Marc Benioff made a shocking announcement: he wouldn't be hiring any more software engineers in 2025. Instead, he'd be going all in on Agentforce, the company's agentic AI system designed to handle multi-step processes all on its own.
Benioff is a true believer in the power of agentic AI to revolutionize how work gets done. At the World Economic Forum in Davos this year, he also painted a picture of how agents will integrate into the workforce—in sales, for example, human agents will assume new roles as their AI counterparts resolve support inquiries.
"We are really moving into a world now of managing humans and agents together," he said.
I'm inclined to agree. AI agents are still new, and there are still a lot of kinks to work out before they reach their full potential. Still, effective leaders should be thinking about how their new blended teams will look—and how they can best equip their existing, human workers for this new reality.
There is a right way for leaders to begin integrating agents onto teams—and then there is the other way. Klarna CEO Sebastian Siemiatkowski has taken the second route, proclaiming that partially automating his human workforce has saved his company millions, and that he is "of the opinion that AI can already do all of the jobs that we, as humans, do."
This isn't actually true, at least not yet—but it is a great way to alarm your employees and sap their motivation.
Instead, leaders should set their teams up for success by helping to identify opportunities for AI-human collaboration. One way to do this is for teams to ensure that agents are working with good data, weeding out wrong or incomplete information and regularly optimizing and providing feedback.
Additionally, the more employees actually work with agents, the more they'll understand their abilities—and limitations. For example, Jotform's recently released AI agent is already being heavily used by our own support team to cut down on routine requests and make their lives easier. I've made it clear that these new automated teammates are not out for their jobs—instead, they're significantly increasing the speed and volume at which we're able to address issues.
Another one of Siemiatkowski's divisive sound bites is that AI is so advanced, it's even capable of replacing CEOs like him, arguing that AI reasoning has reached C-suite levels.
Don't get me wrong—AI agents are excellent at many things, and can capably complete many tasks from end to end all on their own. There's a reason that, according to one survey, 51 percent of organizations are exploring the use of AI agents, with 37 percent actively piloting them.
But AI, agentic and otherwise, is not especially creative. It can, when prompted, offer ideas and serve as a useful sounding board. It's also great at generating drafts and outlines, reducing the 'blank page' anxiety that torments so many of us in the early stages of the creative process. But for now at least, AI is not cut out for doing work that involves high levels of empathy, nor is it good at making gut-based decisions where little information is available.
For leaders and employees alike, our ability to form relationships, think outside the box and make empathetic decisions is where we shine. Research shows that most employees—83 percent—believe that AI will make uniquely human skills even more crucial, while 76 percent crave more human connection as AI grows.
As AI agents enter the workplace, leaders should empower their teams to spend more time flexing their creative muscles. Encourage an environment where employees are free to brainstorm, collaborate, and test-drive ideas—all while using agents to facilitate the process.
Say a team is kicking around a new product idea. An AI agent can gather market intelligence from customer service logs, social media, and internal analytics, testing its viability and accomplishing in minutes what would have previously taken days (and far more resources). This increased efficiency, in turn, creates the space for even more collaborative brainstorming, effectively propelling the creative cycle forward.
Employees at every level will need to get comfortable delegating routine tasks to AI—and some will be more receptive than others. Slack even published an 'AI persona quiz' to determine how readily people will accept agentic teammates, with results ranging from 'Maximalists,' who are all aboard the AI train, to 'Observers,' who are cautiously watching from the sidelines.
A good AI policy accounts for this range of comfort levels. Yet oddly, data from Slack found that 40 percent of workers said their organization has no AI policy at all, and 48 percent reported they would hesitate to tell their managers they were using AI for their work.
This is the opposite of how it should be. Rather than remaining mum on the subject of AI, leaders should be having proactive conversations about their expectations for its usage, and how they anticipate the adoption of agents to change workflows. Saying nothing not only fails to help your teams unlock the incredible benefits of agentic AI, it also creates the perfect environment for confusion and fear to take hold.
AI agents are changing how work gets done, for the better. But the transition will not happen automatically. Adoption will require strong and transparent policies, where teams are not treated as afterthoughts to exciting new technology, but as an essential part of the journey.
Related Articles


Forbes
The AI Paradox: When More AI Means Less Impact
AI is in the news every day. On the one hand, this highlights the vertiginous speed at which the field is developing. On the other, it creates a sense of saturation and angst that makes business organizations either drop the subject altogether or go at it full throttle without much discernment. Both approaches will lead to major misses in the inevitable AI-fication of business. In this article, I'll explore what happens when a business goes down the AI rabbit hole without a clear business objective and a solid grasp of the available alternatives.

If you have attended any AI conference lately, chances are that, by the end, you thought your business was dangerously behind. Many of these events, even if not on purpose, can leave you with the feeling that you need to deploy AI everywhere and automate everything to catch up. If you've succumbed to this temptation, you most likely found out that it is not the right move.

Two years into the generative AI revolution, a counterintuitive truth is emerging from boardrooms to factory floors. Companies pursuing 100% AI automation are often seeing diminished returns, while those treating AI as one element in a broader, human-centered workflow are capturing both cost savings and competitive advantages. The obvious truth is already revealing itself: AI is just one more technology at our disposal, and just like every other new technology, everyone is trying to gain first-mover advantage, which inevitably creates chaos. Those who see through and beyond that chaos are building the foundations of a successful AI-assisted business.

The numbers tell a story that contradicts the automation evangelists. Three in four workers say AI tools have decreased their productivity and added to their workload, according to a recent Upwork survey of 2,500 respondents across four countries. Workers report spending more time reviewing AI-generated content and learning tool complexities than the time these tools supposedly save. Even more revealing: while 85% of company leaders are pushing workers to use AI, nearly half of employees using AI admitted they have no idea how to achieve the productivity gains their employers expect. This disconnect isn't just corporate misalignment—it's a fundamental misunderstanding of how AI creates value.

The companies winning the AI game aren't those deploying the most algorithms. They're the ones who understand that intelligent automation shouldn't rely on AI alone. Instead, successful organizations are orchestrating AI within broader process frameworks where human expertise guides strategic decisions while AI handles specific, well-defined tasks. A good AI strategy always revolves around domain experts, not the other way around.

Consider how The New York Times approached AI integration. Rather than replacing journalists with AI, the newspaper introduced AI tools for editing copy, summarizing information, and generating promotional content, while maintaining strict guidelines that AI cannot draft full articles or significantly alter journalism. This measured approach preserves editorial integrity while amplifying human capabilities.

AI should be integrated strategically and operationally into entire processes, not deployed as isolated solutions indiscriminately exploited in the hope of magic. Research shows that 60% of business and IT leaders use over 26 systems in their automation efforts, and 42% cite lack of integration as a major digital transformation hurdle. The most effective AI implementations focus on task-specific applications rather than general automation. Task-specific models offer highly specialized solutions for targeted problems, making them more efficient and cost-effective than general-purpose alternatives.

Harvard Business School research involving 750 Boston Consulting Group consultants revealed that this precision matters enormously. While consultants using AI completed certain tasks 40% faster with higher quality, they were 19 percentage points less likely to produce correct answers on complex tasks requiring nuanced judgment. This 'jagged technological frontier' demands that organizations implement methodical test-and-learn approaches rather than wholesale AI adoption. Harvard Business Review research confirms that AI notoriously fails at capturing the intangible human factors essential for real-world decision-making: ethical considerations, moral judgments, and contextual nuances that guide business success.

The companies thriving in 2025 aren't choosing between humans and machines. They're building hybrid systems where AI automation is balanced with human interaction to maintain stakeholder trust and capture value that neither could achieve alone. The mantra 'AI will replace your job' keeps running up against a timeless truth: everything that should be automated will be automated, but not everything that can be automated will be.

The Path Forward

The AI paradox isn't a failure of technology—it's a lesson in implementation strategy. Organizations that resist the allure of complete automation and instead focus on thoughtful integration, task-specific deployment, and human-AI collaboration aren't just avoiding the productivity trap. They're building sustainable competitive advantages that compound over time. The question isn't whether your organization should use AI. It's whether you'll fall into the 'more AI' trap or master the art of 'smarter AI'—where less automation actually delivers more impact.

Yahoo
The AI lobby plants its flag in Washington
Top artificial intelligence companies are rapidly expanding their lobbying footprint in Washington — and so far, Washington is turning out to be a very soft target. Two privately held AI companies, OpenAI and Anthropic — which once positioned themselves as cautious, research-driven counterweights to aggressive Big Tech firms — are now adding Washington staff, ramping up their lobbying spending and chasing contracts from the estimated $75 billion federal IT budget, a significant portion of which now focuses on AI.

They have company. Scale AI, a specialist contractor with the Pentagon and other agencies, is also planning to expand its government relations and lobbying teams, a spokesperson told POLITICO. In late March, the AI-focused chipmaking giant Nvidia registered its first in-house lobbyists.

AI lobbyists are 'very visible' and 'very present on the Hill,' said Rep. Don Beyer (D-Va.) in an interview at the Special Competitive Studies Project AI+ Expo this week. 'They're nurturing relationships with lots of senators and a handful of members [of the House] in Congress. It's really important for their ambitions, their expectations of the future of AI, to have Congress involved, even if it's only to stop us from doing anything.'

This lobbying push aims to capitalize on a wave of support from both the Trump administration and the Republican Congress, both of which have pumped up the AI industry as a linchpin of American competitiveness and a means for shrinking the federal workforce. The companies don't all present a unified front — Anthropic, in particular, has found itself at odds with conservatives, and on Thursday its CEO Dario Amodei broke with other companies by urging Congress to pass a national transparency standard for AI companies — but so far the AI lobby is broadly getting what it wants.

'The overarching ask is for no regulation or for light-touch regulation, and so far, they've gotten that,' said Doug Calidas, senior vice president of government affairs for the AI policy nonprofit Americans for Responsible Innovation. In a sign of lawmakers' deference to industry, the House passed a ten-year freeze on enforcing state and local AI regulation as part of its megabill, which is currently working through the Senate.

Critics, however, worry that the AI conversation in Washington has become an overly tight loop between companies and their GOP supporters — muting important concerns about the growth of a powerful but hard-to-control technology. 'There's been a huge pivot for [AI companies] as the money has gotten closer,' Gary Marcus, an AI and cognitive science expert, said of the leading AI firms. 'The Trump administration is too chummy with the big tech companies, and basically ignoring what the American people want, which is protection from the many risks of AI.'

Anthropic declined to comment for this story, referring POLITICO to its March submission to the AI Action Plan that the White House is crafting after President Donald Trump repealed a sprawling AI executive order issued by the Biden administration. OpenAI, too, declined to comment.

This week several AI firms, including OpenAI, co-sponsored the Special Competitive Studies Project's AI+ Expo, an annual Washington trade show that has quickly emerged as a kind of bazaar for companies trying to sell services to the government. (Disclosure: POLITICO was a media partner of the conference.) They're jostling for influence against more established government contractors like Palantir, which has been steadily building up its lobbying presence in D.C. for years, while Meta, Google, Amazon and Microsoft — major tech platforms with AI as part of their pitch — already have dozens of lobbyists in their employ.

What the AI lobby wants is a classic Washington twofer: fewer regulations to limit its growth, and more government contracts. The government budget for AI has been growing. Federal agencies across the board — from the Department of Defense and the Department of Energy to the IRS and the Department of Veterans Affairs — are looking to build AI capacity. The Trump administration's staff cuts and automation push are expected to accelerate demand for private firms to fill the gap with AI.

For AI, growth also demands energy. On the policy front, AI companies have been a key driver of the recent push in Congress and the White House to open up new energy sources, streamline permitting for new data centers and funnel private investment into the construction of these sites. Late last year, OpenAI released an infrastructure blueprint for the U.S. urging the federal government to prepare for a massive spike in demand for computational infrastructure and energy supply. Among its recommendations: creating special AI zones to fast-track permits for energy and data centers, expanding the national power grid and boosting government support for private investment in major energy projects.

Those recommendations are now being closely echoed by Trump administration figures. Last month, at the Bitcoin 2025 Conference in Las Vegas, David Sacks — Trump's AI and crypto czar — laid out a sweeping vision that mirrored the AI industry's lobbying goals. Speaking to a crowd of 35,000, Sacks stressed the foundational role of energy for both AI and cryptocurrency, saying bluntly: 'You need power.' He applauded Trump's push to expand domestic oil and gas production, framing it as essential to keeping the U.S. ahead in the global AI and crypto race.

This is a huge turnaround from a year ago, when AI companies faced a very different landscape in Washington. The Biden administration, and many congressional Democrats, wanted to regulate the industry to guard against bias, job loss and existential risk. No longer. Since Trump's election, AI has become central to the conversation about global competition with China, with Silicon Valley venture capitalists like Sacks and Marc Andreessen now in positions of influence within the Trump orbit. Trump's director of the Office of Science and Technology Policy is Michael Kratsios, a former managing director at Scale AI. Trump himself has proudly announced a series of massive Gulf investment deals in AI. Sacks, in his Las Vegas speech, pointed to those recent deal announcements as evidence of what he called a 'total comprehensive shift' in Washington's approach to emerging technologies.

But as the U.S. throws its weight behind AI as a strategic asset, critics warn that the enthusiasm is muffling one of the most important conversations about AI: its ability to wreak unforeseen harm on the populace, from fairness to existential risk. Among those concerns: bias embedded in algorithmic decisions that affect housing, policing, and hiring; surveillance that could threaten civil liberties; the erosion of copyright protections as AI models hoover up data; and the weakening of labor protections as automation replaces human work.

Kevin De Liban, founder of TechTonic Justice, a nonprofit that focuses on the impact of AI on low-income communities, worries that Washington has abandoned its concern for AI's impact on citizens. 'Big Tech gets fat government contracts, a testing ground for their technologies, and a liability-free regulatory environment,' he said of Washington's current AI policy environment. 'Everyday people are left behind to deal with the fallout.'

There's a much larger question, too, which dominated the early AI debate: whether cutting-edge AI systems can be controlled at all. These risks, long documented by researchers, are now taking a back seat in Washington as the conversation turns to economic advantage and global competition. There's also the very real concern that if an AI company does bring up the technology's worst-case scenarios, it may find itself at odds with the White House itself. Anthropic CEO Amodei said in a May interview that labor force disruptions due to AI would be severe — which triggered a direct attack on his podcast from Sacks, Trump's AI czar, who said that line of thinking led to 'woke AI.'

Still, both Anthropic and OpenAI are going full steam ahead. Anthropic hired nearly a dozen policy staffers in the last two months, while OpenAI similarly grew its policy office over the past year. They're also pushing to become more important federal contractors by securing critical FedRAMP authorizations — a federal program that certifies cloud services for use across government — which could unlock billions of dollars in contracts.

As tech companies grow increasingly cozy with the government, the political will to regulate them is fading — and in fact, Congress appears hostile to any efforts to regulate them at all. In a public comment in March, OpenAI specifically asked the Trump administration for a voluntary federal framework that overrides state AI laws, seeking 'private sector relief' from a patchwork of state AI bills. Two months later, the House added language to its reconciliation bill that would have done exactly that — and more. The provision to impose a ten-year moratorium on state AI regulations passed the House but is expected to be knocked out by the Senate parliamentarian. (Breaking ranks again, Anthropic is lobbying against the moratorium.) Still, the provision has widespread support among Republicans and is likely to make a comeback.
Yahoo
Energy Fuels (UUUU) is Among the Energy Stocks that Gained the Most This Week
The share price of Energy Fuels Inc. (NYSEAMERICAN:UUUU) surged by 10.93% between May 29 and June 5, 2025, putting it among the Energy Stocks that Gained the Most This Week. Let's shed some light on the development.

Energy Fuels Inc. (NYSEAMERICAN:UUUU) is a leading US-based critical minerals company, focused on uranium, rare earth elements, heavy mineral sands, vanadium, and medical isotopes. Investors reacted positively this week after the company disclosed that it had achieved record monthly uranium production at its Pinyon Plain mine in Arizona, with May's output reaching nearly 260,000 pounds of U3O8. Moreover, the company filed an updated technical report on its Bullfrog project in Utah, significantly increasing previously reported in-ground uranium resources.

These developments are especially significant given a recent executive order by President Trump to reinvigorate the American nuclear sector and quadruple the country's nuclear energy capacity. The order also calls for an increase in domestic mining and enrichment of uranium, and a reduction in reliance on imports from Russia and China.

While we acknowledge the potential of UUUU as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns with limited downside risk.