Latest news with #EthanMollick


Economic Times
4 days ago
- Business
You can still outpace AI: Wharton professor reveals a 'skill bundling' strategy to safeguard your future from automation
As artificial intelligence reshapes the modern workplace with stunning speed, one Wharton professor has a sobering message for today's professionals: the safest jobs of tomorrow aren't necessarily the most technical—they're the most complex.

Ethan Mollick, associate professor at the Wharton School and author of Co-Intelligence: Living and Working with AI, says job security in the AI era will increasingly depend on choosing roles that bundle multiple human skills together. That means emotional intelligence, judgment, creativity, and domain expertise—all woven into one. 'AI may outperform you in one or two things,' Mollick tells CNBC Make It, 'but if your job requires five or six of them, it's a lot harder to replace.'

It's the kind of insight that redefines how we think about employability in an increasingly automated world. And with AI usage surging—40% of U.S. workers now use it at least a few times a year, per a Gallup poll—these career choices have never mattered more.

Mollick doesn't sugarcoat the AI wave ahead. Tech labs aren't just chasing progress—they're chasing a paradigm shift. 'Labs are aiming for machines smarter than humans within the next three years,' Mollick warns. 'They're betting on mass unemployment. Whether they succeed or not is still unclear, but we have to take it as a real possibility.'

Even Nvidia CEO Jensen Huang, whose company powers some of the most advanced AI systems, echoes that sentiment, albeit from a different vantage point. On a recent All-In podcast, Huang predicted AI will create more millionaires in five years than the internet did in 20, while also cautioning: 'Anybody who is not using AI will lose their job to someone who is.'

What's the solution?
According to Mollick, job seekers must rethink their strategy. 'Don't go for roles that do one thing,' he says. 'Pick a job like being a doctor—where you're expected to be good at empathy, diagnosis, hand skills, and research. If AI helps with some of it, you still have the rest.'

This idea of "bundled roles"—where a single job draws on varied skills and responsibilities—could be the firewall against replacement. These complex, human-centered positions are harder for AI to replicate wholesale and leave more room for humans to collaborate with AI, not compete against it.

AI's evolution could make entry-level roles scarce—or at least, radically different. 'Companies will need to rethink entry-level hiring,' Mollick notes. 'Not just for productivity, but for training future leaders.' Without the chance to learn through repetition—what Mollick calls 'apprenticeship'—younger workers may miss out on foundational skills. The result could be a workforce with knowledge gaps AI can't fill, even as those same gaps are used to justify greater automation.

Nvidia's Huang calls AI the 'greatest equalizer of our time' because it gives creative power to anyone who can express an idea. 'Everybody is a programmer now,' he says. But critics caution that this accessibility may also deepen divides between the AI-literate and those left behind.

Eric Schmidt, former Google CEO, has a different concern: infrastructure. On the Moonshots podcast, Schmidt warned that AI's growth could be throttled not by chips, but by electricity. The U.S., he says, may need 92 more gigawatts of power to meet AI demand—roughly the output of 92 new nuclear plants, at about one gigawatt apiece.

As AI spreads into every corner of work, from payroll review (yes, Huang uses machine learning for that too) to high-stakes decision-making, one thing is clear: the rules are changing faster than most organizations can adapt. 'The tools are evolving fast,' Mollick says, 'but organizations aren't.
And we can't ask employees to figure it all out on their own.'

He believes the real danger isn't AI itself but the lack of vision from leadership. Without a clear roadmap, workers are left adrift, trying to 'magic' their way into the future.

In the race to stay relevant in the AI era, the best defense isn't to out-code or out-process a machine. It's to out-human it—by doubling down on the kind of nuanced, multi-layered work AI can't yet replicate. And by choosing jobs that ask you to wear many hats, not just one. Or as Mollick puts it: 'Bundled tasks are your best bet for surviving the AI takeover.'


CNBC
6 days ago
- Business
AI won't replace you just yet, Wharton professor says—but it'll be 'a huge concern' for entry-level workers
For many Americans, AI is rapidly changing the way we work. A growing number of workers now use AI at their jobs with some frequency: according to a recent Gallup poll, 40% of U.S. workers say they use AI at work at least a few times a year, and 19% use it several times a week. Both figures have nearly doubled since last year, from 21% and 11%, respectively.

At the same time, over half of American workers are worried about AI's impact on the workforce, according to a Pew Research Center survey. Their fears have merit: a World Economic Forum report published in January found that 48% of U.S. employers plan to reduce their workforce due to AI.

Naturally, the rapid growth of AI in the workplace has raised plenty of questions. How will AI reshape our jobs? What new skills will we need to develop? Which industries will be impacted the most? These questions don't have easy answers, says Ethan Mollick, an associate professor at Wharton and author of "Co-Intelligence: Living and Working with AI." Mollick, who is also co-director of Wharton's Generative AI Labs, is well aware of concerns about AI replacing human jobs. "The idea that you could just sub in AI for people seems naive to me," he says. Still, as AI keeps improving, "there may be effects" for workers. Here's what Mollick has to say about AI and the future of work.

CNBC Make It: There's a lot of concern about AI replacing human jobs, including some big predictions from leaders like Bill Gates. What's your take on that?

AI agents are not there yet. Right now, AI is good at some stuff, bad at some stuff, but it doesn't substitute well for human jobs, overall. It does some things quite well, but the goal of the labs is [to create] fully autonomous agents and machines smarter than humans in the next three years. Do we know they can achieve it? We don't, but that is their bet. That's what they're aiming for. They are expecting and aiming for mass unemployment.
That is what they keep telling us to prepare for. As for believing them or not, we just don't know, right? You have to take it as at least a possibility, but we're not there yet, either. A lot of it is also the choice of organizational leaders who get to decide how these systems are actually used, and organizational change is slower than all the labs and tech people think. A lot of the time, technology creates new jobs. That's possible, too. We just don't know the answer.

As AI usage becomes more prevalent, what skills will we need to develop in the workforce?

If you asked about AI skills a year ago, I would have said prompting skills. That doesn't matter as much anymore. We've been doing a lot of research, and it turns out that the prompts just don't matter the way they used to. So, you know, what does that leave us with? Well, judgment, taste, deep experience and knowledge. But you have to build those in some ways despite AI, rather than with its help. Having curiosity and agency also helps, but these are not really skills. I don't think using AI is going to be the hard thing for most people.

What is the "hard thing," then?

I think it's developing enough expertise to be able to oversee these systems. Expertise is gained by apprenticeship, which means doing some AI-level work [tasks that current AI models can do easily] over and over again, so you learn how to do something right. Why would anyone ever do that again? And that becomes a real challenge. We have to figure out how to solve that with a mix of education and training.

How do you think AI will affect the entry-level job market?

I think people are jumping to the conclusion that [AI is] why we're seeing youth unemployment. I don't think that's the issue yet, but I think that's a huge concern.
Companies are going to have to view entry-level jobs, in some ways, not just as getting work done, but as a chance to get people who will become senior employees and train them up to be that way, which is very different from how they viewed the work before.

Are your students concerned about AI's impact on jobs?

I think everybody's worrying about it, right? Consulting and banking, analyst roles and marketing roles — those are all jobs touched by AI. The more educated you are, the more highly paid you are, the more your job overlaps with AI. So I think everyone's very concerned, and I don't have easy answers for them. The advice I tend to give people is to pick jobs that have as many 'bundled' tasks as possible. Think about doctors. You have a job where someone's supposed to be good at empathy and [surgical] hand skills and diagnosis and be able to run an office and keep up with the latest research. If AI helps you with some of those things, that's not a disaster. If AI can do one or two of those things better than you, that doesn't destroy your job; it changes what you do, and hopefully it lets you focus on the things you like best. So bundled jobs are more likely to be flexible than single-thread jobs.

How might AI adoption play out in the workplace?

For me, the issue is that these tools are not really built as productivity tools. They're built as chatbots, so they work really well at the individual level, but that doesn't translate into something that can be stamped out across the entire team very easily. People are still figuring out how to operate with these things as teams. Do you bring it into every meeting and ask the AI questions in the middle of each meeting? Does everybody have their own AI companion they're talking to? The piece I keep making a big deal about is that it is unfair to ask employees to figure it out.
I'm seeing leadership and organizations say it's urgent to use AI, people will be fired without it, and then they have no articulation about what the future looks like. I want to hammer that point home, which is, without articulating a vision, where do we go? And that's the missing piece. It's not just up to everybody to figure it out. Instructors and college professors need to take an active role in shaping how AI is used. Leaders of organizations need to take an active role in shaping how AI is used. It can't just be, 'everyone figure it out and magic will happen.'


Forbes
07-07-2025
- Entertainment
Deep Research And More From New AI Models
Did you hate these annoying songs? Check out what new deep research AI tools can do.

It's all well and good to tell people to get on ChatGPT, or Claude, or Gemini, and get their hands dirty using an AI model – but some people want more detail. They want to know how to use the best new tools in a more granular way. Deep research is one of the newest and most impressive functions of today's large language models. But most people haven't heard of it yet, let alone used it.

Ethan Mollick is someone I associate with the vanguard of AI research. With his MIT background, Mollick represents a leading voice in this field. So I was reading his latest blog post about different models and how to use them.

Mollick breaks down the most useful models from Claude, ChatGPT and Google into the following categories: 'chat,' 'work' and 'hard problems.' Claude, he notes, is generally better for privacy; you need to select for this in ChatGPT, and it may compromise some functionality with Google. Then Mollick embarks on a description of deep research capabilities that are baked into some of the newest models, but not yet available for free. This is so new that it's still paywalled and rationed for paying users.

The Promise of Deep Research Functionality

Deep research is available from OpenAI, from Google, and from Anthropic, too. When you look at the unveiling of the research function, you see a lot of reports from around February. So how do you access this kind of result? I found a recent post about this from the 'OpenAI cookbook' in June, where the poster talks about downloading a Python SDK and doing API work. It turns out, though, that you can select 'deep research' in the prompt section of ChatGPT 4.5, if you are a paying Plus or Pro user, and get your deep research report that way.
Asking for Deep Research from ChatGPT 4.5

I wanted to try this for myself, so based on what I was experiencing in the moment, I centered on compiling a report of the most annoying songs from 1990 through 2025. My query: 'please create a report on the most annoying music from 1990 until 2025, segmented annually.' Simple, right?

First, ChatGPT asked me to clarify a few basic questions: 'To create a detailed report on the 'most annoying music' from 1990 to 2025, could you please clarify a few things:'

And I answered: '1. all of the above 2. US-based 3. multiple songs per year 4. Yes.'

'Thanks!' GPT responded. 'I'll compile a U.S.-focused report covering the most widely criticized, overplayed, or culturally mocked songs from 1990 to 2025. This will include multiple songs per year, along with context explaining why they were considered annoying—drawing from music critics, listener polls, memes, radio overplay complaints, and public opinion.'

Then the report started. Indeed, it took more than 10 minutes to compile, with chain-of-thought (CoT) displays like 'searching 'Rico Suave': worst song' and 'looking for Macarena.'

Check Out GPT's Chain of Thought

Actually, here are some of the interesting CoT notes the system provided while doing research on dozens of sites (it spent a lot of time reading the Guardian!):

I'm mapping out recent headlines and thought pieces, like those from Global News and the Guardian, all noting "Baby Shark" as the most annoying song. Hm, that's interesting.

I'm preparing the list of most annoying songs from 1993-1995, considering "What's Up," "Cotton Eye Joe," "Macarena," and "Wonderwall." However, I'm thinking through the inclusion of other notable tracks like "Rico Suave" for earlier years.

(There's that 'Rico Suave' song again.)

I'm mapping out the hits and misses of 2001-2006, with 2005's viral sensations like "Crazy Frog" and "Photograph" by Nickelback taking center stage. Hm, that's interesting—these tracks seem to evoke strong reactions.
(Note how the model is actually 'taking notes' while researching, giving us little bits of personality along with the actual meat of the report.)

Here's a funny comparison: I'm comparing the impact of Nickelback's "Rockstar" in 2006 to Soulja Boy's "Crank That" in 2007. The former was widely criticized, while the latter was loved by many, raising questions about musical taste over time.

A research-based note: I'm digging through songs by genre and year, considering widespread opinion and specific examples to select the most frequently labeled annoying tracks. I'm evaluating hits like Meghan Trainor's "All About That Bass" (2014) and Tyga's "Stimulated" (2015) for their ubiquity and frequency of being labeled annoying. Progressing through last five years gradually.

And then there was this: 'Noticing the challenge of time limitations, I'm figuring out how to balance inclusion of pivotal examples against the need for conciseness.' So the model is actually trying to triage its work. This is next-level stuff.

Finally, GPT appeared to wrap up, dotting its i's and crossing its t's: 'I'm checking to see if the 'Achy Breaky' song contains any references to anthropomorphic sharks.'

Eventually, I got the full report. It runs dozens of pages and provides great coverage of 35 years of horrible, terrible music. A few standout years: 1990, 1994, 1997 and, fast-forwarding, 2013 and 2023.

Some Thoughts on Voice Mode

Let's move back to the blog post: after delving into the above, Mollick also covers the verbal capabilities of new systems that really change the interface.
As he notes, you can talk to these models while you're in the car, or on a walk, or anywhere that you have the space and time. I think verbal communication is going to finally make LLM responses 'real' to us. And that's all I have to say about that – for now.

I think all of this is deeply informative for people who want to be practical about LLM use. The deep research function in particular is something people will be learning about as time goes on. How many of us will figure it out before something new comes along that makes it obsolete? In any case, this was an excellent guide to making the most of AI in today's environment.


Forbes
30-06-2025
Employees: AI Is Giving Us Hours Back Every Week
There is no shortage of debate about the implications of artificial intelligence for productivity, with many suggesting that the gains are still elusive. Plus, there is the constant underlying dread reported among workers that AI is conducting a hostile takeover of their jobs. However, in study after study, when asking affected workers themselves, a different picture emerges. They seem to like AI.

Ethan Mollick, associate professor at the University of Pennsylvania's Wharton School, drives this point home in a recent LinkedIn post: 'The repeated argument that I see online that AI is not actually useful to real people needs to be retired, based on the representative national surveys we now have on real AI users. The idea that most people are not seeing gains from AI is just not matching what they are reporting in surveys – or what we find in experiments.'

Two recent studies cited by Mollick point to productivity gains, higher quality work, and yes, greater job satisfaction with the use of AI.

For starters, teachers have something to teach us about AI and productivity. A survey of 2,000 teachers in grades K through 12, conducted by Gallup and underwritten by the Walton Family Foundation, shows six in 10 have used an AI tool for their work this school year, with heavier use among high school (66%) and early-career (69%) teachers. Teachers who use AI weekly save 5.9 hours per week, the study shows – the equivalent of six weeks per school year. Currently, about three in 10 teachers are using AI at least weekly, with more frequent users experiencing greater time savings.

A separate survey of 4,278 U.S. workers, conducted by researchers from Stanford, George Mason, and Clemson Universities, finds workers using AI report a three-fold productivity gain on many tasks.
LLM adoption among employees has increased rapidly, the researchers found – from 30.1% as of December 2024 to 43.2% as of March and April 2025 – with ChatGPT and other generative AI tools behind the rapid increase. Among those who use genAI, about one-third say they use it daily.

On average, workers spend about 30 minutes working with a genAI tool per task. They estimate it would take about 90 minutes to complete such tasks manually, meaning they save about an hour per task. If every task could be completed with genAI, the researchers estimate, that would triple worker productivity.

AI has a positive impact on the quality of work as well, the Gallup teachers' study shows. Most teachers, 64%, report higher quality in the modifications they make to student materials. Another 61% say they generate higher-quality insights about student learning or achievement data. A majority, 57%, say AI improves the quality of their grading and student feedback. 'Teachers who use AI are more likely to be optimistic about the impacts of AI on student outcomes,' the report states. Close to half, 48%, think AI will increase student engagement, compared with 25% of non-users.
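Those figures are easy to sanity-check with a bit of arithmetic. The sketch below recomputes the claimed three-fold speedup and the teachers' "six weeks per school year"; the 36-week school year and 40-hour work week are my assumptions, not numbers from either study:

```python
# Back-of-the-envelope check of the survey figures quoted above.
# The 36-week school year and 40-hour work week are assumptions,
# not numbers from the article.

MANUAL_MINUTES = 90   # workers' estimate for doing a task by hand
AI_MINUTES = 30       # reported time per task with a genAI tool

minutes_saved = MANUAL_MINUTES - AI_MINUTES   # about an hour per task
speedup = MANUAL_MINUTES / AI_MINUTES         # the claimed "tripling"

HOURS_SAVED_PER_WEEK = 5.9                    # Gallup teachers' figure
SCHOOL_YEAR_WEEKS = 36                        # assumed
WORK_WEEK_HOURS = 40                          # assumed

work_weeks_saved = HOURS_SAVED_PER_WEEK * SCHOOL_YEAR_WEEKS / WORK_WEEK_HOURS

print(f"{minutes_saved} min saved per task, {speedup:.0f}x speedup")
print(f"~{work_weeks_saved:.1f} work weeks saved per school year")
```

Under those assumptions, the teachers' savings come to about 5.3 forty-hour weeks, consistent with the article's rounding to "six weeks."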


Forbes
23-06-2025
- Business
Matching The Pace of AI Innovation To Maximize ROI
Michael Meucci is President and CEO of Arcadia, a healthcare data platform that helps payers and providers put their data to work.

AI innovation is accelerating at a pace unlike anything we have seen before. Ethan Mollick, a well-known Wharton professor and thought leader on AI, put it best when he said that today's AI tools are the worst they will ever be. In other words, while we've already seen AI tools advance significantly, AI is only getting better, faster and smarter, and we need to get ready for the AI capabilities of the future.

In today's AI era, organizations are engaged in a massive delegation exercise, trying to figure out which tasks can be fully delegated to AI and which require human oversight. Leaders and AI decision-makers must keep AI's future potential in mind when strategizing today. New research conducted by Bessemer Venture Partners, Amazon Web Services and Bain & Company shows that over half of healthcare organizations are seeing material ROI within 12 months of AI implementation.

For even more healthcare organizations to maximize ROI, leaders must recognize that they should not simply replicate their existing workflows with AI. AI implementation must instead be a redefining process in which leaders take a daring and holistic approach to match the pace of AI innovation. While some solutions may not be possible today, they may be in the future. Through this exercise, teams will identify where human intervention is necessary and establish appropriate guardrails and AI governance mechanisms. Organizations that take this approach will stay ahead of the AI innovation curve, enabling teams to build more effective workflows that let employees focus on high-value work that drives business growth.

Enable employees to activate the AI strategy.

AI is most powerful when organizations use it to redefine operations. Big organizational shifts are driven by small, incremental changes.
Leaders should enable teams to think creatively about AI implementation and encourage employees to envision what a completely automated workflow could look like. If teams do not imagine what is possible without restrictions, they are unintentionally putting up barriers and stifling innovation. By coaching employees to imagine a blank canvas and design a fully automated workflow, organizations can uncover new opportunities for AI.

For example, at my organization, Arcadia, an employee noticed colleagues asking similar questions across internal Slack channels and recognized an opportunity to use AI to automate a process where humans were not needed. He created an AI tool that connects to multiple internal sources and provides automated answers to employees' questions in seconds, reducing the time employees spend finding answers for each other and letting them stay focused on higher-value tasks. These low-risk, high-reward tools deliver dramatic efficiency gains and underscore how AI-powered innovation is possible when leaders encourage teams to think outside the box. Once teams understand how fully automated workflows might operate, they can start building the foundational requirements for optimization.

A great example of an organization that is leaning into AI implementation is Kaiser Permanente, which recently published a case study on their AI-powered scribe rollout, including lessons learned. Among those takeaways, Kaiser surprisingly changed vendors between the initial pilot and final rollout of their AI scribe because they wanted the tool to be better integrated into their electronic health record system. As I have written about before, when AI is integrated into existing workflows, the outcomes are better. At Kaiser, providers liked the initial AI scribe, but agile leaders pivoted quickly because they wanted the tool to work even better.
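The internal Q&A tool described above follows a common retrieval pattern: index answers from internal sources, match an incoming question against them, and return the best hit. Here is a minimal keyword-overlap sketch of that shape; the documents, questions, and function names are all illustrative (not Arcadia's actual tool), and a production version would use embeddings and an LLM rather than word overlap:

```python
# Minimal sketch of an internal Q&A bot that matches employee questions
# against answers indexed from internal sources. All documents and
# names here are made up for illustration.

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into a set of bare words."""
    return {w.strip(".,?!").lower() for w in text.split()}

# Stand-in for content pulled from wikis, docs, and Slack archives.
KNOWLEDGE_BASE = [
    ("How do I reset my VPN password?",
     "Use the self-service portal and choose 'Reset VPN credentials'."),
    ("Where is the expense policy?",
     "The expense policy lives in the Finance wiki under 'Policies'."),
    ("Who approves production deploys?",
     "The on-call release manager approves production deploys."),
]

def answer(question: str) -> str:
    """Return the stored answer whose question overlaps most with the query."""
    q_tokens = tokenize(question)
    best = max(KNOWLEDGE_BASE,
               key=lambda pair: len(q_tokens & tokenize(pair[0])))
    return best[1]

print(answer("where can I find the expense policy?"))
```

Swapping the overlap score for embedding similarity, and passing the retrieved text through an LLM to phrase the reply, turns this toy into the retrieval-augmented pattern most such internal bots use.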
They saw the full potential of this technology and decided to drive more meaningful impact for their business. To build the foundation for automated workflows, connecting tools that use various types of AI is key. In the Kaiser example, the AI scribe tool they ultimately chose uses agentic AI, which autonomously makes decisions and solves multi-step problems without human assistance, to transcribe a patient visit. Generative AI, which relies on prompts to produce specific outputs and requires more human intervention, is layered in to produce a summary from the transcript. By stitching together tools that leverage different types of AI, teams lay the foundation for the fully automated workflows of the future while also eliminating time-consuming administrative tasks and letting employees focus on the highest-value work.

Understand where human intervention is critical.

For now, a more holistic AI implementation strategy does not mean that humans are fully out of the loop. I intentionally write 'for now' because it's difficult to predict how dramatically AI could transform our world. But today, it means that organizations have better insight into instances where a human touchpoint may need to be more heavy-handed. Many still look at integrating AI from an extreme perspective: either humans do everything or machines do everything. But I view the degree of human intervention as a spectrum. In healthcare, primary care delivery is an area where doctors should not be out of the loop (for now), but AI can help automate administrative tasks, like developing patient summaries. With many large pharmacy chains struggling financially, however, pharmacy workflows are primed for automation: automated prescription drug dispensing can help reduce the risk of error.
When organizations have a deeper understanding of where human intervention is needed within workflows, they can establish more effective AI systems. Leaders and AI decision-makers must take teams out of their comfort zone and encourage them to think differently about AI implementation to match the unstoppable pace of AI innovation. Real transformation will take place when organizations implement AI tools holistically and stitch them into a cohesive system. This way of thinking will empower companies to make more strategic AI investments and harness the power of AI, driving more efficiency and growth for their business.