
Who does AI help the most?
One of the big questions about genAI's implications for work is whether it gives a greater performance boost to less experienced, lower-performing workers or to more experienced, higher-performing workers. Research skews toward the former, but a few studies have found that access to genAI tools like ChatGPT helps high performers more.
One of the most well-known studies in the second camp has now been discredited. The paper—which was widely covered in media outlets, including Charter—found, among other things, that a genAI tool that helped materials scientists discover new materials benefited high-performing scientists significantly more than their lower-performing peers.
The author of the paper is no longer at MIT, and the university said it now 'has no confidence in the veracity of the research contained in the paper.'
This came as a shock to us, as that research was important to our understanding of how AI may impact work and inequality. Given its retraction, we wanted to synthesize what the other current research says about this question of whether AI will widen or narrow performance gaps between workers.
The question matters for understanding whether AI will increase inequality, explains Rembrand Koning, an associate professor at Harvard Business School, who co-authored a paper on this topic. But, he adds, it also has important implications for how AI might impact less experienced workers. 'If [AI] helps people with more experience, [who] tend to be the people who can do things better, it might take out the lowest rungs of the labor market because we think AI can have such large effects.' Longer term, Koning adds, 'there may be policy solutions or organizational changes that can get around some of these problems…we need to know the facts to start understanding how the economy might respond.'
Most studies that look into this question find that genAI has an equalizing effect, helping less experienced, lower-performing workers more than their more experienced, higher-performing peers. With the retraction of the materials science paper, I know of only a few papers that have found the opposite to be true.
The AI-as-an-equalizer result has shown up in studies of everything from customer-support work to writing and consulting tasks. There are two important aspects of these studies to keep in mind when applying their results to other settings. First, the tasks in question are areas where we know AI already performs very well. Second, in these studies, what the genAI tool produces is often relatively close to—or, in some cases, is—the final product. So the mechanism through which AI levels the playing field in these studies is essentially by doing much, sometimes most, of the work.
The AI-widens-performance-gaps result has shown up in a few papers, including one that looked at college students in a debate competition and another that studied entrepreneurs in Kenya. Both studies suggest judgment plays a crucial role in whether or not someone benefits from AI. In the Kenya context, for example, the AI tool provided a range of advice to entrepreneurs, and it was up to them to decide what to do with it. 'Those who have the judgment do better because they're able to be like, "Yeah, these 10 pieces of advice—six are terrible, two are neutral, two are actually really good. Let me focus on the two that can actually help my business,"' explains Harvard Business School's Koning, one of the study's co-authors.
So, will AI widen or narrow performance gaps?
My view is that it depends on a job's level of autonomy. If you're in a job where you help set your goals, determine how to reach those goals, and make many decisions throughout your day, AI likely helps you more if you're a high performer with strong judgment. If, however, you're in a role that's more constrained, requires less decision-making, and involves a pre-defined set of tasks that AI can do well, then AI likely narrows the performance gap.
'The way I think of that is when you start your day and you're looking at the possible number of paths you can go down…how many paths could I possibly choose? And then how different are the end destinations of those paths?' explains Ben Weidmann, director of research at the Harvard Skills Lab and co-author of a related paper. 'My hypothesis, which is very similar to yours…is that jobs where there are more paths are going to disproportionately have the AI gains go to people who are high-skilled' in their decision-making capabilities, says Weidmann.
'The other way to think about it is [whether] the AI [is] producing the inputs or the outputs for your job,' says Koning. In many of the studies that find that AI narrows performance gaps, the AI's output becomes the worker's output, with varying degrees of editing. Koning points out that in the debate and entrepreneurship studies, the AI's answers were the inputs for the tasks of debating an opponent or making critical business decisions and implementing them.

Related Articles
Yahoo
Job&Talent scales its AI squad—unlocking up to 30% productivity gains in frontline industries
MADRID and ATLANTA, Aug. 20, 2025 (GLOBE NEWSWIRE) -- Job&Talent, a world-leading workforce platform for essential industries, is scaling its suite of AI agents to help frontline businesses significantly boost productivity. Building on the success of Clara, its AI recruiter, the company has introduced three new AI agents—an AI account manager, AI attendance coach, and AI performance coach—designed to improve critical frontline metrics such as fulfillment, attendance, worker quality, and retention—all of which, if left unaddressed, erode productivity and inflate workforce costs. With Clara, Sara, Teo, and Maria, Job&Talent isn't just adopting AI—it's redefining how frontline work is managed. This transformation is powered by the company's purpose-built, end-to-end platform and deep operational know-how, drawn from over 300,000 worker placements each year.

Clara has already transformed high-volume hiring
Hiring in frontline industries is slow, manual, and often fails to meet the speed and scale businesses need—especially during peak periods. Clara, Job&Talent's first AI agent launched in late 2024, was built to change that. Designed specifically for high-volume hiring, Clara autonomously interviews and vets candidates 24/7 in multiple languages, often outside office hours when workers are most reachable. Since launch, she has conducted 190,000+ interviews and delivered 22,000+ hires—operating 65% faster than manual recruiting, while earning 90% satisfaction scores from candidates. Clara has become a critical tool for companies facing seasonal surges, like Black Friday, where she helped one employer reduce time-to-hire for thousands of workers from 8 weeks to just 3.

Sara prevents no-shows and keeps shifts on track
High absenteeism is one of the most persistent challenges in frontline operations. Sara, Job&Talent's AI attendance coach, tackles this head-on by detecting no-shows in real time, calling absent workers within 10 minutes, capturing reasons for absence, and delivering tailored coaching to change future behavior. By reinforcing clock-in compliance and embedding new attendance habits at scale, Sara has driven a 30% reduction in absenteeism and helped companies reach attendance rates of up to 90% on average—improving reliability across even the most complex shift environments.

Teo spots staffing gaps instantly before they hurt productivity
In fast-paced frontline operations, even small gaps in shift coverage can disrupt productivity, delay output, and overload teams—yet supervisors often lack the real-time visibility or tools to react quickly. Teo is Job&Talent's 24/7 AI account manager—a conversational genAI chatbot embedded in the company app that proactively monitors workforce data and flags issues before they escalate. Supervisors can simply ask, 'Who is absent today?', 'Show me open shifts,' or 'Which shifts are at risk?' and Teo will instantly surface insights and suggest next steps. Beyond insights, Teo can also take action: automatically inviting backup workers, updating shift plans, or triggering callouts to fill last-minute gaps. With Teo, local teams gain a real-time planning assistant that helps them stay ahead of staffing issues.

Maria turns feedback into measurable performance gains
Frontline performance depends on timely, constructive feedback—yet many issues go unaddressed until it's too late.
Maria, Job&Talent's AI performance coach, detects early signs of underperformance and proactively coaches workers in positive, confidence-building conversations. Using real-time supervisor feedback and worker ratings, Maria delivers clear and personalized guidance on strengths, areas for improvement, and next steps. Each call is summarised for both worker and supervisor via the app, creating a transparent feedback loop. Maria is already improving workforce quality and boosting productivity by turning feedback into real behavioral change—at scale.

Juan Urdiales, Co-CEO and Co-Founder of Job&Talent, said: 'Our platform is designed with one goal: help companies boost productivity and efficiency at scale, while providing workers with the opportunities and tools they need to thrive. By combining deep industry knowledge with AI and automation, we are making companies more productive by filling shifts faster, reducing no-shows, improving worker quality and retaining talent—delivering measurable impact across the entire employment journey.' This is only the beginning: more AI applications are already in development to support every stage of the employment journey.

MEDIA CONTACT
Robin Nierynck
Director, Global Communications
press@

ABOUT JOB&TALENT
Job&Talent is a world-leading, AI-powered workforce platform for essential industries. We help companies boost productivity and efficiency at scale, while providing workers with the opportunities and tools they need to thrive. Our mission is simple: to empower the people who make the world go round. Built on deep industry expertise, cutting-edge technology, and smart AI agents, our end-to-end platform covers the entire workforce lifecycle—from recruitment and planning to time and attendance, performance, cost management, and communication. It delivers measurable improvements in the areas that matter most: fulfilment, attendance, retention, and workforce quality. In 2024, Job&Talent placed over 300,000 workers across more than 3,250 companies in sectors such as logistics, manufacturing, and retail. Headquartered in Madrid, the company operates in 10 countries across Europe, the U.S., and Latin America.


Fast Company
What is ‘self-evolving AI'? And why is it so scary?
As a technologist and a serial entrepreneur, I've witnessed technology transform industries from manufacturing to finance. But I've never had to reckon with the possibility of technology that transforms itself. And that's what we are faced with when it comes to AI—the prospect of self-evolving AI. What is self-evolving AI? Well, as the name suggests, it's AI that improves itself—AI systems that optimize their own prompts, tweak the algorithms that drive them, and continually iterate and enhance their capabilities. Science fiction? Far from it. Researchers recently created the Darwin Gödel Machine, which is 'a self-improving system that iteratively modifies its own code.' The possibility is real, it's close—and it's mostly ignored by business leaders. And this is a mistake. Business leaders need to pay close attention to self-evolving AI, because it poses risks that they must address now.

Self-Evolving AI vs. AGI
It's understandable that business leaders ignore self-evolving AI, because traditionally the issues it raises have been addressed in the context of artificial general intelligence (AGI), something that's important, but more the province of computer scientists and philosophers. In order to see that this is a business issue, and a very important one, first we have to clearly distinguish between the two things. Self-evolving AI refers to systems that autonomously modify their own code, parameters, or learning processes, improving within specific domains without human intervention. Think of an AI optimizing supply chains that refines its algorithms to cut costs, then discovers novel forecasting methods—potentially overnight. AGI represents systems with humanlike reasoning across all domains, capable of writing a novel or designing a bridge with equal ease. And while AGI remains largely theoretical, self-evolving AI is here now, quietly reshaping industries from healthcare to logistics.

The Fast Take-Off Trap
One of the central risks created by self-evolving AI is the risk of AI take-off. Traditionally, AI take-off refers to the process by which a system goes from a certain threshold of capability (often discussed as 'human-level') to being superintelligent and capable enough to control the fate of civilization. As we said above, we think that the problem of take-off is actually more broadly applicable, and specifically important for business. Why? The basic point is simple: self-evolving AI means AI systems that improve themselves. And this possibility isn't restricted to broader AI systems that mimic human intelligence. It applies to virtually all AI systems, even ones with narrow domains, for example AI systems designed exclusively for managing production lines or making financial predictions. Once we recognize the possibility of AI take-off within narrower domains, it becomes easier to see the huge implications that self-improving AI systems have for business. A fast take-off scenario—where AI capabilities explode exponentially within a certain domain or even a certain organization—could render organizations obsolete in weeks, not years. For example, imagine a company's AI chatbot evolves from handling basic inquiries to predicting and influencing customer behavior so precisely that it achieves 80%+ conversion rates through perfectly timed, personalized interactions. Competitors using traditional approaches can't match this psychological insight and rapidly lose customers.
The problem generalizes to every area of business: within months, your competitor's operational capabilities could dwarf yours. Your five-year strategic plan becomes irrelevant, not because markets shifted, but because their AI evolved capabilities you didn't anticipate.

When Internal Systems Evolve Beyond Control
Organizations face equally serious dangers from their own AI systems evolving beyond control mechanisms. For example:
Monitoring Failure: IT teams can't keep pace with AI self-modifications happening at machine speed. Traditional quarterly reviews become meaningless when systems iterate thousands of times per day.
Compliance Failure: Autonomous changes bypass regulatory approval processes. How do you maintain SOX compliance when your financial AI modifies its own risk assessment algorithms without authorization?
Security Failure: Self-evolving systems introduce vulnerabilities that cybersecurity frameworks weren't designed to handle. Each modification potentially creates new attack vectors.
Governance Failure: Boards lose meaningful oversight when AI evolves faster than they can meet to review or understand the changes. Directors find themselves governing systems they cannot comprehend.
Strategy Failure: Long-term planning collapses as AI rewrites fundamental business assumptions on weekly cycles. Strategic planning horizons shrink from years to weeks.
Beyond individual organizations, entire market sectors could destabilize. Industries like consulting or financial services—built on information asymmetries—face existential threats if AI capabilities spread rapidly, making their core value propositions obsolete overnight.

Catastrophizing to Prepare
In our book TRANSCEND: Unlocking Humanity in the Age of AI, we propose the CARE methodology—Catastrophize, Assess, Regulate, Exit—to systematically anticipate and mitigate AI risks. Catastrophizing isn't pessimism; it's strategic foresight applied to unprecedented technological uncertainty. And our methodology forces leaders to ask uncomfortable questions: What if our AI begins rewriting its own code to optimize performance in ways we don't understand? What if our AI begins treating cybersecurity, legal compliance, or ethical guidelines as optimization constraints to work around rather than rules to follow? What if it starts pursuing objectives we didn't explicitly program but that emerge from its learning process?
Key diagnostic questions every CEO should ask to identify organizational vulnerabilities before they become existential threats:
Immediate Assessment: Which AI systems have self-modification capabilities? How quickly can we detect behavioral changes? What monitoring mechanisms track AI evolution in real time?
Operational Readiness: Can governance structures adapt to weekly technological shifts? Do compliance frameworks account for self-modifying systems? How would we shut down an AI system distributed across our infrastructure?
Strategic Positioning: Are we building self-improving AI or static tools? What business model aspects depend on human-level AI limitations that might vanish suddenly?

Four Critical Actions for Business Leaders
Based on my work with organizations implementing advanced AI systems, here are four immediate actions I recommend:
Implement Real-Time AI Monitoring: Build systems that track AI behavior changes instantly, not quarterly. Embed kill switches and capability limits that can halt runaway systems before irreversible damage. (A minimal sketch of what such monitoring could look like appears at the end of this piece.)
Establish Agile Governance: Traditional oversight fails when AI evolves daily. Develop adaptive governance structures operating at technological speed, ensuring boards stay informed about system capabilities and changes.
Prioritize Ethical Alignment: Embed value-based 'constitutions' into AI systems. Test rigorously for biases and misalignment, learning from failures like Amazon's discriminatory hiring tool.
Scenario-Plan Relentlessly: Prepare for multiple AI evolution scenarios. What's your response if a competitor's AI suddenly outpaces yours? How do you maintain operations if your own systems evolve beyond control?

Early Warning Signs Every Executive Should Monitor
The transition from human-guided improvement to autonomous evolution might be so gradual that organizations miss the moment when they lose effective oversight. Therefore, smart business leaders are sensitive to signs that reveal troubling escalation paths:
- AI systems demonstrating unexpected capabilities beyond original specifications
- Automated optimization tools modifying their own parameters without human approval
- Cross-system integration where AI tools begin communicating autonomously
- Performance improvements that accelerate rather than plateau over time

Why Action Can't Wait
As Geoffrey Hinton has warned, unchecked AI development could outstrip human control entirely. Companies beginning preparation now—with robust monitoring systems, adaptive governance structures, and scenario-based strategic planning—will be best positioned to thrive. Those waiting for clearer signals may find themselves reacting to changes they can no longer control.
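To make the real-time monitoring recommendation above a bit more concrete, here is a minimal, hypothetical sketch in Python. It is not from the article or any vendor's product; the class name, the metric, and the 3-sigma threshold are illustrative assumptions. The idea is simply to compare a deployed system's behavior against a human-reviewed baseline and halt it for review when behavior drifts sharply.

```python
# Hypothetical sketch: a behavior-drift monitor with a kill switch.
# All names (BehaviorMonitor, the acceptance-rate metric, the 3-sigma
# threshold) are illustrative assumptions, not any real product's API.

from dataclasses import dataclass, field
from statistics import mean, pstdev


@dataclass
class BehaviorMonitor:
    """Tracks one behavioral metric and halts the system on sudden drift."""
    baseline: list[float]              # recent values observed under human review
    max_sigma: float = 3.0             # how far from baseline we tolerate
    halted: bool = field(default=False, init=False)

    def check(self, value: float) -> bool:
        """Return True if the system may keep running, False if it was halted."""
        mu, sigma = mean(self.baseline), pstdev(self.baseline) or 1e-9
        if abs(value - mu) > self.max_sigma * sigma:
            self.halted = True         # "kill switch": stop autonomous operation
            return False
        self.baseline = self.baseline[1:] + [value]  # slide the baseline window
        return True


if __name__ == "__main__":
    # Example: monitoring a hypothetical AI agent's hourly offer-acceptance rate.
    monitor = BehaviorMonitor(baseline=[0.21, 0.19, 0.22, 0.20, 0.21])
    for rate in [0.22, 0.20, 0.55]:    # 0.55 is an abrupt, unexplained jump
        if not monitor.check(rate):
            print(f"Halting for human review: rate={rate}")
            break
```

In practice, the point of such a check is less the statistics than the escalation path: when the monitor trips, autonomous operation stops and a human decides what happens next.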

Business Insider
OpenAI engineer's advice to high school students? 'Absolutely learn to code'
OpenAI engineer Szymon Sidor recently sounded off on why, even in a world with ChatGPT, Cursor, and other AI tools, it's still good for current high school students to learn basic coding. "You should absolutely learn to code," Sidor said during an August episode of the company's podcast. "One skill that is at a premium and will continue to be at a premium is having really structured intellect that can break down complicated problems into pieces." Sidor, who has worked at OpenAI for over eight years, said programming is a great way to learn such a skill. Andrew Mayne, who hosts the podcast and previously served as OpenAI's chief science communicator, questioned the notion that people would downplay learning to code simply because AI tools can do it for you. "Whenever I hear people say, 'Don't learn to code,' it's like, do I want an airplane pilot who doesn't understand aerodynamics? This doesn't make much sense to me," Mayne said. Their views echo some other top voices in the industry, including GitHub CEO Thomas Dohmke, who said in April that coding should be taught as a core subject. AI's coding abilities continue to advance. Meta CEO Mark Zuckerberg told Joe Rogan earlier this year that the company's AI would soon be able to write code that matches the output of a mid-level engineer. "The obvious tactical thing is just get really good at using AI tools. Like when I was graduating as a senior from high school, the obvious tactical thing was get really good at coding," OpenAI CEO Sam Altman told Stratechery in March. "And this is the new version of that."