
The AI Talent Crisis: Why Companies Are Looking In The Wrong Places
Nicole Zheng is the Chief Business Officer at A.Team, which powers companies with elite tech talent and ready-to-deploy AI solutions.
Last year, generative AI officially entered Gartner's famed 'Trough of Disillusionment'—a rite of passage for any emerging technology. Most enterprise companies are eager to embrace AI, and some have saved hundreds of millions of dollars through strategic use cases, yet most fail to progress past the pilot phase. For many corporate leaders, one of the main barriers is talent.
"The biggest AI talent shortage has created a $134.8 billion market opportunity," said João (Joe) Moura, CEO of crewAI. "Many focus solely on technical roles, but overlook a crucial segment: Traditional approaches emphasize Deep ML engineering or Ph.D.s," which implies barriers for talent lacking those qualifications.
It's a challenge my company has heard repeatedly when consulting with enterprise leaders on their generative AI strategy, and we wanted to understand it more deeply. So we surveyed 250 senior tech leaders responsible for AI initiatives. Our research confirmed the problem: While 96% of tech leaders plan to increase AI investments in 2025, only 36% have successfully deployed AI to production. The gap isn't due to a lack of enthusiasm or budget—it's a talent problem. In fact, 85% of tech leaders have delayed critical AI initiatives due to talent shortages.
But I believe there's another reason for this talent crisis: Most companies are looking for AI talent in all the wrong places, using outdated hiring approaches that fundamentally clash with the pace of AI innovation.
Why Traditional Hiring Is Hindering, Not Helping
When we asked these senior AI leaders how long it takes to hire top product and engineering talent, the answers were eye-opening: 67% said it takes 4+ months to hire top engineering talent.
This hiring timeline is completely mismatched with the whiplash pace of AI development; the technology evolves every few weeks, bringing new opportunities to drive value for customers and employees alike. By the time most organizations have hired the talent they initially sought, that person's skills may already be outdated. It's like buying horses while the Model T accelerates past in the fast lane.
Adopting The 'T-Shaped' Talent Model
One way organizations can succeed is by taking a different approach: a 'T-shaped' model that integrates specialized freelance talent with a smaller core team of full-time employees. Picture the letter T: The vertical stem is your core, full-time engineers who know your systems inside and out. The horizontal bar is the specialized contractors who bring targeted expertise where you need it.
Why does this approach work? Because GenAI is still nascent, your full-time engineers carry the double burden of maintaining your core platform while learning the latest agentic system design techniques and tools; with the landscape evolving daily, succeeding at both is next to impossible. By contrast, independent AI/ML engineers can bring experience from dozens of deployments across different companies—often choosing independence precisely for this opportunity. You're not just hiring skills; you're hiring the experience gained from rapid innovation cycles.
Factors That Contribute To T-Shaped Model Effectiveness
1. The Shift In The Technical Talent Market: The freelance economy has evolved dramatically from its gig-economy origins. Today's freelance talent marketplace includes elite, specialized professionals who've chosen to go independent for the autonomy, flexibility and financial upside it can provide. For top AI talent, a big part of the appeal is getting to choose the specific projects and problems that interest them most, instead of sitting in a 'golden cage,' handsomely paid but underused.
2. The Innovation And Deployment Boost: Fractional talent can boost innovation capabilities and accelerate your road map. Look for freelance AI specialists who have experience with deployments at multiple companies; they should have more knowledge of the pitfalls that lurk around the corner and be able to help your company avoid them.
3. Cost Efficiency: By bringing in specialized fractional talent for the specific stages of development that you need them for, you can deploy your capital and resources more effectively.
4. A Startup Mindset: I've found that for most organizations—particularly those in non-tech industries—GenAI requires a monumental mindset shift. While IT teams excel at buying and integrating software for systems of record, capitalizing on GenAI's full capabilities often means learning to leverage your company's data and automate human workflows to create new value. Choosing team members who bring out-of-the-box thinking as well as AI engineering expertise can help you make this shift.
5. Flexibility: Instead of trying to hire full-time employees for every specific technical need, focus on building fluid teams that can adapt as quickly as the technology does.
The Importance Of Collaboration
A common pitfall in attempting the T-shaped model is to treat independent talent as outsiders, but this isn't outsourcing. The trick is to embed specialists into your existing teams, having them code and participate in standups right alongside your core engineers. In my experience, this approach can deliver more than project outcomes; it can also allow your core team to upskill through direct collaboration.
Conclusion
The AI talent crisis may seem daunting, but it's far from insurmountable. You don't need to compete with Netflix to hire that million-dollar AI engineer; instead, consider adopting a more flexible approach to how you build your teams.
