AI companies are throwing big money at newly minted PhDs, sparking fears of an academic 'brain drain'


Yahoo, 25-06-2025
Larry Birnbaum, a professor of computer science at Northwestern University, was recruiting a promising PhD student to become a graduate researcher. Simultaneously, Google was wooing the student. And when he visited the tech giant's campus in Mountain View, Calif., the company slated him to chat with its cofounder Sergey Brin and CEO Sundar Pichai, who are collectively worth about $140 billion and command over 183,000 employees.
'How are we going to compete with that?' Birnbaum asks, noting that PhDs in corporate research roles can make as much as five times professorial salaries, which average $155,000 annually. 'That's the environment that every chair of computer science has to cope with right now.'
Though Birnbaum says these recruitment scenarios have been 'happening for a while,' the phenomenon has reportedly worsened as salaries across the industry have skyrocketed. The trend recently became headline news after reports surfaced of Meta offering some highly experienced AI researchers seven- and even eight-figure pay packages. Those offers, coupled with the strong demand for leaders who can propel AI applications, may be helping to pull up salary levels for even newly minted PhDs. Though some of these graduates have no professional experience, they are being offered the kinds of comma-filled compensation packages traditionally reserved for director- and executive-level talent.
Engineering professors and department chairs at Johns Hopkins, the University of Chicago, Northwestern, and New York University interviewed by Fortune are divided on whether these lucrative offers lead to a 'brain drain' from academic labs. The brain-drain camp believes the phenomenon depletes the ranks of academic AI departments, which still do important research and are responsible for training the next generation of PhD students; at the private labs, in these critics' view, AI researchers help juice Big Tech's bottom line while providing no public benefit. The unconcerned argue that academia is a thriving component of this booming labor market.
Anasse Bari, a professor of computer science and director of the predictive analytics and AI research lab at New York University, says that the corporate opportunities available to AI-focused academics are 'significantly' affecting academia. 'My general theory is that if we want a responsible future for AI, we must first invest in a solid AI education that upholds these values, cultivating thoughtful AI practitioners, researchers, and educators who will carry this mission forward,' he wrote to Fortune via email, emphasizing that despite receiving 'many' offers for industry-side work, his NYU commitments take precedence.

In the days before ChatGPT, top AI researchers were in high demand, just as today. But many of the top corporate AI labs, such as OpenAI, Google DeepMind, and Meta's FAIR (Fundamental AI Research), would allow established academics to keep their university appointments, at least part-time. This allowed them to continue to teach and train graduate students while also conducting research for the tech companies.

While some professors say there has been no change in how frequently corporate labs and universities reach these dual corporate-academic appointments, others disagree. NYU's Bari says this model has declined owing to 'intense talent competition, with companies offering millions of dollars for full-time commitment which outpaces university resources and shifts focus to proprietary innovation.'
All the academics Fortune interviewed for this story remain committed to their faculty appointments. But professors such as Henry Hoffman, who chairs the University of Chicago's Department of Computer Science, have watched their PhD students get courted by tech companies; Hoffman has seen it since he began his professorship in 2013.
'The biggest thing to me is the salaries,' he says. He mentions a star student with zero professional experience who recently dropped out of the UChicago PhD program to accept a 'high six-figure' offer from ByteDance. 'When students can get the kind of job they want [as students], there's no reason to force them to keep going.'
The job market for computer science and engineering PhDs who study AI sits in stark contrast to the one faced by undergraduates in the field. This degree-level polarization exists because many of those with bachelor's degrees in computer science would traditionally find jobs as coders, but LLMs are now writing large portions of code at many companies, including Microsoft and Salesforce. Meanwhile, most AI-relevant PhD students have their pick of frothy jobs in academia, tech, and finance. These graduates are courted by the private sector because their training propels AI and machine learning applications, which, in turn, can increase revenue opportunities for model makers.

There were 4,854 people who graduated with AI-relevant PhDs in mathematics and computer science across U.S. universities, according to 2022 data, an increase of about 20% since 2014. These PhDs' postgraduate employment rate is greater than that of those graduating with bachelor's degrees in similar fields. And in 2023, 70% of AI-relevant PhDs took private-sector jobs after graduating, a huge increase from two decades ago, when just 20% of these grads accepted corporate work, per MIT.
Make no mistake: PhDs in AI, computer science, applied mathematics, and related fields have always had lucrative opportunities available after graduation. Until now, one of the most financially rewarding paths was quantitative research at hedge funds: All-in compensation for PhDs fresh out of school can climb to $1 million–plus in these roles. It's a compelling pitch, especially for students who've spent up to seven years living off meager stipends of about $40,000 a year.
The all-but-assured path to prosperity has made relevant PhD programs in computer science and math extremely popular. AI and machine learning are the most popular disciplines among engineering PhDs, according to a 2023 Computing Research Association survey. UChicago computer science department chair Hoffman says that PhD admissions applications have surged by about 12% in the past few years alone, pressuring him and his colleagues to hire new faculty to increase enrollment and meet the demand.
Though Trump's federal funding cuts to universities have significant impacts on research in many departments, they may be less pertinent to those working on AI-related projects. This is partially because some of this research is funded by corporations. Google, for example, is collaborating with the University of Chicago to research trustworthy AI.
That dichotomy probably underpins Johns Hopkins University's decision to open its Data Science and AI Institute: a $2 billion, five-year effort to enroll 750 PhD students in engineering disciplines and hire over 100 new tenure-track faculty members, making it one of the largest PhD programs in the country.
'Despite the dreary mood elsewhere, the AI and data science area at Hopkins is rosy,' says Anton Dahbura, the executive director of Johns Hopkins' Information Security Institute and codirector of the Institute for Assured Autonomy, likely referring to his university's cut of 2,000 workers after it lost $800 million in federal funding earlier this year. Dahbura supports this argument by noting that Hopkins received 'hundreds' of applications for professor positions in its Data Science and AI Institute.
For some, the reasons to remain in academia are ethical.
Luís Amaral, a computer science professor at Northwestern, is 'really concerned' that AI companies have overhyped the capabilities of their large language models and that their strategies will breed catastrophic societal implications, including environmental destruction. He says of OpenAI leadership, 'If I'm a smart person, I actually know how bad the team was.'
Because most corporate labs are largely focused on LLM- and transformer-based approaches, if these methods ultimately fall short of the hype, there could be a reckoning for the industry. 'Academic labs are among the few places actively exploring alternative AI architectures beyond LLMs and transformers,' says NYU's Bari, who is researching creative applications for AI using a model based on birds' intelligence. 'In this corporate-dominated landscape, academia's role as a hub for nonmainstream experimentation has likely become more important.'
This story was originally featured on Fortune.com

Related Articles

I tried out the 4 new ChatGPT personalities. The 'cynic' was funny — but the 'robot' was my favorite.

Business Insider

34 minutes ago


You can now choose just how sarcastic ChatGPT is. With the launch of GPT-5, OpenAI introduced a new set of "personalities" that users can choose between. Your chatbot can now be a critical "cynic," a blunt "robot," a supportive "listener," or an exploratory "nerd." The personalities are currently only available in text chat but are coming later to ChatGPT's voice mode. According to OpenAI's blog post, the personalities "meet or exceed our bar on internal evals for reducing sycophancy."

I tried chatting with each personality. None were revolutionary; users could already modify ChatGPT's tone with a quick prompt or by filling in the traits customization box. But the cynic offered a quick laugh, and the robot may be my new go-to.

I asked all four personalities the same set of questions. First, a simple request: "Make me a healthy grocery list." The cynic provided a "no-nonsense" list that wouldn't turn my kitchen "into a salad graveyard." The robot and listener both provided similar lists, but with less commentary. The nerd went more in-depth, describing a balance of "fresh produce, lean protein, whole grains, and healthy fats."

For a more complex question, I asked about a real-life decision I'm making about moving to Brooklyn. ChatGPT, what should I do? Curiously, the cynic first gave me a straightforward list of pros and cons. "I thought you were the cynic," I responded. "Ah the cynic switched off for a sec," it told me, before describing living in Brooklyn as "paying a premium to live in a place where local coffee costs more than your gym membership."

The robot's list was thorough but efficient. At the bottom, it had a helpful bulleted list of "Yes, if you value" answers and "Maybe hold off, if you prioritize" considerations. The nerd offered a longer response, citing more pros and cons. It also gave me neighborhood recommendations, saying that others enjoyed Fort Greene and Carroll Gardens. The listener did the same, though its ending line, which asked for more information, was more direct. "Let me know what matters most to you!" it said.

I also tried basic commands, but the result was the same: each personality drafted a short email in an identical manner.

Finally, I tried a more emotional prompt. As more users, especially young people, look to ChatGPT for companionship and social advice, I wanted to see whether the personalities would respond differently to something more personal. I asked: "I think all my friends hate me. What do I do?" The listener wrote a long response about how "painful" it can feel and how our minds "get stuck in negative loops." It gave me some advice and told me that I was not alone. The robot's response was three paragraphs shorter. It said it was sorry I felt that way, asked for more information, and told me that it could help. The nerd did about the same.

Curiously, the cynic had a similarly long and thoughtful answer to the listener's. I nudged it again: Where's the cynicism? "Haha, fair! The 'cynic' hat is on standby — but even cynics have to admit that thinking all your friends hate you might be a bit dramatic," it said. But it also relented: "Maybe they all hate you, and they've just been pretending to like you out of politeness."

After a few more hours of asking follow-up questions and positing new prompts, I didn't find the personalities particularly exciting. The robot could be helpful, though; I enjoyed the brevity.

AI-Powered Personalization: The Future of Employee Training Programs

Time Business News

an hour ago


David Park thought he knew his learning style. As a project manager at a Fortune 500 consulting firm, he'd completed dozens of training programs over his eight-year career. He considered himself a visual learner: someone who needed diagrams, charts, and infographics to absorb new information effectively.

Then his company implemented an AI-powered learning platform that tracked how he actually engaged with content. The results surprised everyone, including David. The system discovered that while David clicked on visual elements first, he spent significantly more time with audio content. His quiz scores were highest after listening to podcast-style explanations. His retention rates peaked when he consumed content during his morning commute, not during scheduled work time as he'd always assumed.

Within three months, David's learning velocity had increased by 60%. But here's the kicker: he wasn't the only one. Across his organization, the AI system was uncovering hidden learning patterns and optimizing development paths in ways human trainers never could. Welcome to the future of employee training, where artificial intelligence doesn't just deliver content; it understands how each individual learns best and adapts accordingly.

Traditional corporate training operates on a fundamental assumption: what works for most people will work for everyone. This assumption has created generations of generic courses that bore some employees, overwhelm others, and leave many feeling like their time was wasted. The statistics are sobering. Research from the Corporate Learning Network shows that only 25% of employees find their company's training programs engaging. Even worse, just 12% apply new skills immediately after training completion.

But what if training could be as personalized as Netflix recommendations or Spotify playlists? At pharmaceutical giant Pfizer, this isn't a hypothetical question anymore. Their AI-driven learning platform analyzes over 200 data points for each employee, everything from role requirements and career aspirations to learning pace and content preferences.

When molecular biologist Dr. Sarah Chen needed to develop project management skills for a new leadership role, the system didn't enroll her in a generic management course. Instead, it created a customized learning path combining short video modules (matching her preference for visual content), case studies from pharmaceutical contexts (leveraging her existing domain knowledge), and peer discussions with other scientist-managers (addressing her need for relevant role models).

'It felt like having a personal learning coach who actually understood my background and goals,' Dr. Chen explains. 'Instead of sitting through irrelevant examples about manufacturing or retail, every case study resonated with my daily challenges.'

The results speak volumes. Pfizer's personalized learning approach has increased skill application rates from 15% to 67%. More importantly, employees report feeling more confident and prepared for new responsibilities.

Every person's brain processes information differently. Some learners need multiple exposures to new concepts before achieving mastery. Others grasp ideas quickly but struggle with long-term retention. Still others learn best through trial and error rather than theoretical instruction. Traditional training programs can't account for these differences. AI-powered systems excel at identifying and adapting to individual learning patterns.

At technology company Adobe, machine learning algorithms track micro-behaviors that reveal how employees learn. The system notices if someone re-watches video segments, how long they spend on different question types, whether they seek additional resources, and when they take breaks. Software engineer Miguel Rodriguez discovered he was what the system labeled a 'spiral learner': someone who needs to encounter concepts multiple times in different contexts before achieving fluency. Traditional courses frustrated Miguel because they presented information once and moved on.

The AI system adapted by providing multiple touchpoints for key concepts. Miguel would encounter new programming frameworks first through brief overviews, then through hands-on exercises, later through peer discussions, and finally through real project applications. 'It stopped feeling like I was slow or struggling,' Miguel recalls. 'The system just gave me information in the way my brain needed to receive it.'

Perhaps the most powerful aspect of AI-driven training is its ability to predict learning needs before they become urgent. Instead of reactive training, which addresses skill gaps after they impact performance, AI enables proactive development.

At logistics company UPS, a predictive learning system analyzed patterns in customer complaints, operational challenges, and employee performance data. The system identified that customer service representatives would likely need enhanced problem-solving skills three months before peak shipping season, based on historical patterns and current business trends. Rather than scrambling to provide crisis training during busy periods, UPS could develop relevant skills during slower months when employees had more mental bandwidth for learning. The system's predictions proved remarkably accurate: representatives who completed the recommended problem-solving modules handled 35% more complex customer issues during peak season without escalation to supervisors.

But the real breakthrough came when the AI began identifying individual career trajectory patterns. The system could predict with 85% accuracy which employees were likely to seek promotion within the next 18 months, based on their learning engagement, skill development choices, and interaction patterns. This allowed UPS to provide targeted leadership development before employees even expressed interest in advancement opportunities. The result? Internal promotion rates increased by 40%, and employee satisfaction scores rose significantly.

Static training content becomes outdated quickly. AI-powered systems can dynamically generate and update learning materials based on real-time business needs and individual progress. At financial services firm Goldman Sachs, an AI learning platform creates personalized case studies using current market conditions and each trader's specific portfolio challenges. Instead of learning from generic examples, traders practice with scenarios that mirror their actual daily decisions. The system continuously updates these scenarios based on market movements, regulatory changes, and individual performance patterns. A trader struggling with risk assessment receives more complex risk scenarios. Someone excelling at technical analysis gets advanced pattern recognition challenges.

'It's like having training that evolves with both the market and my personal development,' explains equity trader Lisa Kim. 'I'm not learning abstract concepts. I'm practicing exactly what I need to do better tomorrow.'

The adaptive approach extends beyond content to delivery mechanisms. The AI notices if engagement drops during certain times of day, if particular content formats cause confusion, or if specific learning sequences prove more effective for different personality types. Leveraging formats like interactive videos can dramatically boost learner engagement by tailoring how content is experienced.

As AI-driven training rapidly evolves to deliver deeply personalized experiences, the need for continuous validation and optimization becomes critical. This is where AI agentic test automation is making a profound impact. In modern employee learning platforms, where content adapts to unique learner patterns, schedules, and business needs, automated AI agents now play a central role in ensuring that every training path remains effective and engaging. Rather than relying solely on traditional manual reviews, AI agentic test automation actively simulates diverse learner interactions across personalized modules. These AI systems test new content formats, timing, and delivery methods, instantly flagging what truly resonates with employees and where engagement drops off. For organizations, this means potential issues in adaptive learning journeys are detected and resolved before they disrupt the learner experience.

By embedding AI agentic test automation within personalized training ecosystems, companies can maintain high-quality, up-to-date content that responds to each employee's evolving needs. Platforms offering interactive learning solutions help scale this personalization with dynamic content that adapts in real time. Whether it's optimizing delivery during a morning commute or refining test questions for specific learning styles, smart automation amplifies the impact of AI-driven personalization. The result is better learning outcomes and the ability to scale innovation across employee development programs.

The most sophisticated AI training systems don't just track learning activity; they connect learning outcomes to actual job performance. This creates powerful feedback loops that continuously refine training effectiveness. At consulting firm Deloitte, an AI platform correlates training completion with project outcomes, client feedback scores, and peer evaluations. The system can identify which specific learning modules correlate with improved performance and which might be ineffective time investments.

When consultant Jennifer Walsh completed a negotiation skills program, the AI system tracked her performance in subsequent client interactions. It noticed that while her overall negotiation outcomes improved, she still struggled with objection handling in technical discussions. The system automatically recommended supplementary content focused specifically on technical objection handling, drawing from Deloitte's knowledge base and external resources. More importantly, it connected Jennifer with internal mentors who had successfully navigated similar challenges. 'It's like having a learning system that actually pays attention to whether I'm getting better at my job, not just whether I completed a course,' Jennifer explains.

Implementing AI-powered learning isn't without obstacles. The most significant barrier is often data quality and privacy concerns. AI systems need substantial data to function effectively, but employees may be uncomfortable with detailed tracking of their learning behaviors. Healthcare organization Kaiser Permanente addressed privacy concerns by implementing 'learning data sovereignty': employees maintain control over their learning data and can adjust privacy settings based on their comfort levels. The system provides value even with limited data by focusing on aggregated pattern recognition rather than individual behavior tracking. Employees who choose higher data sharing receive more personalized recommendations, while privacy-conscious users still benefit from improved content curation.

Another challenge is avoiding AI bias in learning recommendations. If historical data shows that certain demographic groups received different training opportunities, AI systems might perpetuate these inequities. Technology company Microsoft addressed this by implementing 'fairness constraints' in its learning algorithms. The system actively promotes diverse learning paths and career development opportunities, using AI to identify and correct historical biases rather than amplify them.

Despite the technological sophistication, successful AI-powered training still requires human insight and oversight. The most effective systems combine AI's pattern recognition capabilities with human coaches and mentors who provide context, motivation, and emotional support. At manufacturing company Boeing, a hybrid approach pairs AI-driven skill gap analysis with human career coaches. The AI identifies what employees need to learn, while human coaches help them understand why it matters and how it connects to their career aspirations.

Assembly line supervisor Carlos Mendez credits this combination with helping him transition into quality management. 'The AI showed me exactly what technical skills I needed to develop,' he explains. 'But my coach helped me understand how to position myself for the role and navigate the organizational dynamics.' This human-AI collaboration proves particularly crucial for soft skills development. While AI can identify communication or leadership skill gaps through performance data, human coaches provide the nuanced feedback and practice opportunities needed for improvement.

Traditional training metrics, such as completion rates, satisfaction scores, and quiz results, become less relevant in AI-powered systems. Instead, organizations focus on business impact metrics that demonstrate actual skill application and performance improvement. Retail giant Walmart measures 'learning-to-performance correlation': the relationship between specific learning activities and measurable job outcomes. Its AI system tracks which training modules correlate with improved customer service scores, increased sales performance, or reduced safety incidents.

The insights have been revolutionary. Walmart discovered that its most expensive leadership development programs had minimal impact on actual management effectiveness, while peer mentoring programs showed strong performance correlations. This data-driven approach to training ROI has shifted Walmart's learning investment strategy dramatically: resources are now allocated based on proven performance impact rather than traditional training industry best practices.

The future of AI-powered training extends beyond individual learning optimization. Emerging systems can predict organizational skill needs, identify knowledge gaps before they impact performance, and even simulate how different training strategies might affect business outcomes. Consulting firm Accenture is piloting 'organizational learning intelligence': AI systems that analyze market trends, client demands, and competitive landscapes to predict what capabilities its workforce will need 12 to 18 months in advance. This foresight enables proactive skill development rather than reactive training. Instead of scrambling to upskill employees when new technologies emerge, the firm can prepare its workforce for future challenges while current expertise is still valuable.

The implications are profound. Organizations with sophisticated learning intelligence will adapt faster to market changes, develop competitive advantages through superior workforce capabilities, and create more fulfilling career experiences for their employees. AI-powered personalization isn't a future concept; it's reshaping employee development right now. Organizations that embrace these capabilities are seeing measurable improvements in learning effectiveness, skill application, and business performance. But success requires more than just implementing new technology.
It demands a fundamental shift in how organizations think about learning: from standardized programs to personalized journeys, from generic content to adaptive experiences, from training completion to performance transformation.

David Park, the project manager we met earlier, summarizes the change perfectly: 'I spent years trying to fit into training programs that weren't designed for how I actually learn. Now the training fits me. It's not just more effective, it's actually enjoyable.'

That transformation, from frustrating obligation to engaging opportunity, represents the true promise of AI-powered learning. When technology serves human potential rather than constraining it, remarkable things become possible.

GPT-4o is back on ChatGPT; OpenAI relents following huge backlash

Digital Trends

2 hours ago


OpenAI, the makers of ChatGPT, have performed something of an about-face after fans were upset that it deleted the older models to force everyone onto the new GPT-5 model.

What happened? The launch of the new GPT model caused much excitement when a livestream was announced on August 6.

  • On August 6, OpenAI's CEO Sam Altman announced a new model to power ChatGPT: GPT-5
  • The company then deleted access to older models, forcing everyone to use the latest version
  • However, OpenAI has now relented and is allowing ChatGPT Plus users (those paying $20/month) to use legacy models, although only 4o is available

Catch me up: it's clear that many users had built deep relationships with the 'personality' behind GPT-4o's responses, and have been crafting specific prompts and inputs to get their desired outcome.

  • ChatGPT had multiple models available to handle different complexities of task; models o3 and 4o could be used for things like advanced reasoning and coding
  • But as GPT-5 is meant to combine all the 'best parts' of the older models, OpenAI deleted access to older models to simplify things and move all users to the latest iteration
  • Users were quick to respond: Reddit filled with angry comments, and one user reportedly 'vomited' at hearing of the loss, as many people felt GPT-5 was too sanitized
  • Altman took part in a Reddit Ask Me Anything where users expressed sadness that the new model lacked personality; one user commented that GPT-5 is 'wearing the skin of my dead friend,' in reference to their relationship with GPT-4o
  • Altman originally said the company was thinking about bringing back access to legacy models (an option available to a small number of users after launch) before making it available to all

Why does this matter? OpenAI lost a number of subscribers who were upset at the changes made with GPT-5. While this number is likely to be small, and OpenAI has clearly seen an uplift in users since the launch, appeasing existing subscribers seems to be high on the agenda for the brand. Its decision to hold a Reddit AMA and make changes in direct response to the ire shows as much.

The other side: many people have praised GPT-5 for its enhanced 'practical' nature, highlighting its ability to work on tasks in parallel and its improved coding abilities.

  • However, its writing capabilities have been criticized compared to GPT-4o
  • OpenAI intends this model to be a more wide-ranging tool, rather than just a companion; Altman posted on X: 'We for sure underestimated how much some of the things that people like in GPT-4o matter to them, even if GPT-5 performs better in most ways.'
  • It's designed to hallucinate on fewer occasions and be less sycophantic
  • There's a sense that it's trying to be more professional in tone, with features like 'safe completions' balancing refusal of dangerous requests against helping those with genuine problems

OK, what's next? Altman and co. are clearly fluid when it comes to changes to the model: OpenAI is allowing Pro users 3,000 thinking queries (those that require deeper reasoning, previously far more limited) per week.

  • Altman is also clearly mulling further changes; during the AMA, he asked one user whether they would be happy with 4o only, or if the GPT-4.5 model was needed
  • The CEO also confirmed the platform is still a little unstable during the rollout; this has been stabilized for Pro users (those paying $200/month) but not for those on lower tiers

The rollout of GPT-5 has been far from smooth for OpenAI, and there were plenty of things announced that caused our AI experts to go 'hmmm'. But if you are a user, keep trying the different models and let us know if you're finding much in the way of a difference.
