Latest news with #Co-Intelligence: Living and Working with AI


Time of India
3 days ago
- Business
- Time of India
Forget coding: Autodesk CEO pitches 'total systems thinking' as your ultimate shield against the AI takeover
In a world racing to keep up with artificial intelligence, simply knowing how to code might no longer be the edge it once was. According to Autodesk CEO Andrew Anagnost, the skill that may truly help future-proof your career is something deeper, broader, and far more human: total systems thinking.

In an interview with Business Insider, Anagnost emphasized that as AI models become increasingly capable of writing code independently, the most valuable human contributions will come not from technical repetition, but from interdisciplinary insight. 'If the coding models are going to be doing the code for you, what's more important is that you understand this whole notion of systems-level and interdisciplinary thinking,' he said.

Anagnost, who holds a Ph.D. in aeronautical engineering and computer science, is a strong advocate of breaking out of traditional disciplinary silos. He believes future job roles won't necessarily go to those who go deep into one niche area, unless they're aiming for research careers. Instead, the next big value-add will come from individuals who can connect the dots across different fields — and creatively manage the output of AI systems.

The Rise of the Creative Orchestrator

'Humans will need to take the role of creative orchestrators,' Anagnost said, adding that it's not just about what is made but how it all fits together. In other words, those who can understand the broader picture of how a product is designed, built, and delivered — and how AI fits into that lifecycle — will be in demand.

Coding Isn't Dead, But It's No Longer Elite

This shift in thinking is already playing out in workplaces as tools like GitHub Copilot and OpenAI's Codex automate increasingly complex coding tasks. 'There will be more people generating code than ever before,' Anagnost said. 'And many of them won't have backgrounds in computer science.' Rather than making coding obsolete, this democratization means that coding becomes just another tool — and not necessarily a distinguishing skill.

From Four Roles to Two

According to Anagnost, a typical software company today employs a team that includes a product manager, designer, engineer, and quality assurance tester. But that's changing fast. In a near-future setup, he envisions a leaner model where a product designer collaborates directly with an AI coding assistant to handle both development and testing. What ties this streamlined workflow together? 'Total systems thinking,' he said. It's about knowing how the entire machine works — from vision to execution — and not just being a cog in the machine.

The Soft Skills AI Still Can't Touch

His message aligns closely with that of Wharton professor Ethan Mollick, author of Co-Intelligence: Living and Working with AI. In a recent interview with CNBC Make It, Mollick argued that the safest roles in an AI-driven world aren't necessarily the most technical — they're the most complex. 'AI may outperform you in one or two things,' Mollick said, 'but if your job requires five or six of them, it's a lot harder to replace.' He advises professionals to gravitate toward 'bundled roles' — jobs that blend empathy, judgment, creativity, and domain expertise. These roles are harder to automate and, more importantly, make room for humans to collaborate with AI rather than be replaced by it.

One unintended consequence of this shift could be the loss of traditional entry-level roles. As AI handles more of the repeatable grunt work, young professionals may have fewer chances to learn by doing. Without that foundational experience, Mollick warns, the pipeline for future leaders could be at risk.

A Leadership Wake-Up Call

Both Mollick and Anagnost agree on one thing: the real problem isn't AI — it's leadership lag. Companies must rethink hiring, training, and education models to adapt to this new world. The future belongs to those who can think broadly, manage complexity, and orchestrate outcomes with the help of intelligent systems.

If you're planning your next career move — or even your college major — consider this: it's no longer just about learning to code. It's about understanding how systems connect, how humans and machines can co-create, and how creativity still holds power. In Anagnost's words, the future may hold fewer traditional computer science grads in software firms, 'but there'll probably be more people creating product than ever before.'


Economic Times
27-07-2025
- Business
- Economic Times
You can still outpace AI: Wharton professor reveals a 'skill bundling' strategy to safeguard your future from automation
As artificial intelligence reshapes the modern workplace with stunning speed, one Wharton professor has a sobering message for today's professionals: the safest jobs of tomorrow aren't necessarily the most technical—they're the most complex. Ethan Mollick, associate professor at the Wharton School and author of Co-Intelligence: Living and Working with AI, says job security in the AI era will increasingly depend on choosing roles that bundle multiple human skills together. That means emotional intelligence, judgment, creativity, and domain expertise—all woven into one. 'AI may outperform you in one or two things,' Mollick tells CNBC Make It, 'but if your job requires five or six of them, it's a lot harder to replace.'

It's the kind of insight that redefines how we think about employability in an increasingly automated world. And with AI usage surging—40% of U.S. workers now use it at least a few times a year, per a Gallup poll—these career choices have never mattered more.

Mollick doesn't sugarcoat the AI wave ahead. Tech labs aren't just chasing progress—they're chasing a paradigm shift. 'Labs are aiming for machines smarter than humans within the next three years,' Mollick warns. 'They're betting on mass unemployment. Whether they succeed or not is still unclear, but we have to take it as a real possibility.'

Even Nvidia CEO Jensen Huang, whose company powers some of the most advanced AI systems, echoes that sentiment—albeit from a different vantage point. In a recent All-In podcast, Huang predicted AI will create more millionaires in five years than the internet did in 20, while also cautioning: 'Anybody who is not using AI will lose their job to someone who is.'

What's the solution? According to Mollick, job seekers must rethink their strategy. 'Don't go for roles that do one thing,' he says. 'Pick a job like being a doctor—where you're expected to be good at empathy, diagnosis, hand skills, and research. If AI helps with some of it, you still have the rest.' This idea of "bundled roles"—where a single job draws on varied skills and responsibilities—could be the firewall against replacement. These complex, human-centered positions are harder for AI to replicate wholesale and leave more room for humans to collaborate with AI, not compete against it.

AI's evolution could make entry-level roles scarce—or at least, radically different. 'Companies will need to rethink entry-level hiring,' Mollick notes. 'Not just for productivity, but for training future leaders.' Without the chance to learn through repetition—what Mollick calls 'apprenticeship'—younger workers may miss out on foundational skills. The result could be a workforce with knowledge gaps AI can't fill, even as those same gaps are used to justify greater automation.

Nvidia's Huang calls AI the 'greatest equalizer of our time' because it gives creative power to anyone who can express an idea. 'Everybody is a programmer now,' he says. But critics caution that this accessibility may also deepen divides between the AI-literate and those left behind. Eric Schmidt, former Google CEO, has a different concern: infrastructure. On the Moonshots podcast, Schmidt warned that AI's growth could be throttled not by chips, but by electricity.
The U.S., he says, may need 92 more gigawatts of power to meet AI demands—equivalent to 92 new nuclear plants. As AI spreads into every corner of work, from payroll review (yes, Huang uses machine learning for that too) to high-stakes decision-making, the one thing that's clear is this: the rules are changing faster than most organizations can adapt. 'The tools are evolving fast,' Mollick says, 'but organizations aren't. And we can't ask employees to figure it all out on their own.' He believes the real danger isn't AI itself—but the lack of vision from leadership. Without a clear roadmap, workers are left adrift, trying to 'magic' their way into the future. In the race to stay relevant in the AI era, the best defense isn't to out-code or out-process a machine. It's to out-human it—by doubling down on the kind of nuanced, multi-layered work AI can't yet replicate. And by choosing jobs that ask you to wear many hats, not just one. Or as Mollick puts it: 'Bundled tasks are your best bet for surviving the AI takeover.'


Time of India
26-07-2025
- Business
- Time of India
You can still outpace AI: Wharton professor reveals a 'skill bundling' strategy to safeguard your future from automation
As artificial intelligence reshapes the modern workplace with stunning speed, one Wharton professor has a sobering message for today's professionals: the safest jobs of tomorrow aren't necessarily the most technical—they're the most complex. Ethan Mollick, associate professor at the Wharton School and author of Co-Intelligence: Living and Working with AI, says job security in the AI era will increasingly depend on choosing roles that bundle multiple human skills together. That means emotional intelligence, judgment, creativity, and domain expertise—all woven into one. 'AI may outperform you in one or two things,' Mollick tells CNBC Make It, 'but if your job requires five or six of them, it's a lot harder to replace.'

It's the kind of insight that redefines how we think about employability in an increasingly automated world. And with AI usage surging—40% of U.S. workers now use it at least a few times a year, per a Gallup poll—these career choices have never mattered more.

'They're Aiming for Mass Unemployment'

Mollick doesn't sugarcoat the AI wave ahead. Tech labs aren't just chasing progress—they're chasing a paradigm shift. 'Labs are aiming for machines smarter than humans within the next three years,' Mollick warns. 'They're betting on mass unemployment. Whether they succeed or not is still unclear, but we have to take it as a real possibility.'

Even Nvidia CEO Jensen Huang, whose company powers some of the most advanced AI systems, echoes that sentiment—albeit from a different vantage point. In a recent All-In podcast, Huang predicted AI will create more millionaires in five years than the internet did in 20, while also cautioning: 'Anybody who is not using AI will lose their job to someone who is.'

Pick the Job with Layers, Not Just Titles

What's the solution? According to Mollick, job seekers must rethink their strategy. 'Don't go for roles that do one thing,' he says. 'Pick a job like being a doctor—where you're expected to be good at empathy, diagnosis, hand skills, and research. If AI helps with some of it, you still have the rest.' This idea of "bundled roles"—where a single job draws on varied skills and responsibilities—could be the firewall against replacement. These complex, human-centered positions are harder for AI to replicate wholesale and leave more room for humans to collaborate with AI, not compete against it.

Gen Z's Entry-Level Catch-22

AI's evolution could make entry-level roles scarce—or at least, radically different. 'Companies will need to rethink entry-level hiring,' Mollick notes. 'Not just for productivity, but for training future leaders.' Without the chance to learn through repetition—what Mollick calls 'apprenticeship'—younger workers may miss out on foundational skills. The result could be a workforce with knowledge gaps AI can't fill, even as those same gaps are used to justify greater automation.

AI's Double-Edged Sword: Democratizer or Divider?

Nvidia's Huang calls AI the 'greatest equalizer of our time' because it gives creative power to anyone who can express an idea. 'Everybody is a programmer now,' he says. But critics caution that this accessibility may also deepen divides between the AI-literate and those left behind. Eric Schmidt, former Google CEO, has a different concern: infrastructure. On the Moonshots podcast, Schmidt warned that AI's growth could be throttled not by chips, but by electricity. The U.S., he says, may need 92 more gigawatts of power to meet AI demands—equivalent to 92 new nuclear plants. As AI spreads into every corner of work, from payroll review (yes, Huang uses machine learning for that too) to high-stakes decision-making, the one thing that's clear is this: the rules are changing faster than most organizations can adapt.

AI's Real Disruption? Leadership That Lags

'The tools are evolving fast,' Mollick says, 'but organizations aren't. And we can't ask employees to figure it all out on their own.' He believes the real danger isn't AI itself—but the lack of vision from leadership. Without a clear roadmap, workers are left adrift, trying to 'magic' their way into the future. In the race to stay relevant in the AI era, the best defense isn't to out-code or out-process a machine. It's to out-human it—by doubling down on the kind of nuanced, multi-layered work AI can't yet replicate. And by choosing jobs that ask you to wear many hats, not just one. Or as Mollick puts it: 'Bundled tasks are your best bet for surviving the AI takeover.'


CNBC
25-07-2025
- Business
- CNBC
AI won't replace you just yet, Wharton professor says—but it'll be 'a huge concern' for entry-level workers
For many Americans, AI is rapidly changing the way we work. A growing number of workers now use AI at their jobs with some frequency. According to a recent Gallup poll, 40% of U.S. workers say that they use AI at work at least a few times a year, and 19% of workers use it several times a week. Both statistics have nearly doubled since last year, from 21% and 11%, respectively.

At the same time, over half of American workers are worried about AI's impact on the workforce, according to a Pew Research Center survey. Their fears have merit: a World Economic Forum report published in January found that 48% of U.S. employers plan to reduce their workforce due to AI.

Naturally, the rapid growth of AI in the workplace has raised plenty of questions. How will AI reshape our jobs? What new skills will we need to develop? Which industries will be impacted the most by AI? These questions don't have easy answers, says Ethan Mollick, an associate professor at Wharton and author of "Co-Intelligence: Living and Working with AI." Mollick, who is also the co-director of Wharton's Generative AI Labs, is well aware of concerns about AI replacing human jobs. "The idea that you could just sub in AI for people seems naive to me," he says. Still, as AI keeps improving, "there may be effects" for workers, he says. Here's what Mollick has to say about AI and the future of work.

CNBC Make It: There's a lot of concern about AI replacing human jobs, including some big predictions from leaders like Bill Gates. What's your take on that?

AI agents are not there yet. Right now, AI is good at some stuff, bad at some stuff, but it doesn't substitute well for human jobs, overall. It does some things quite well, but the goal of the labs is [to create] fully autonomous agents and machines smarter than humans in the next 3 years. Do we know they can achieve it? We don't, but that is their bet. That's what they're aiming for. They are expecting and aiming for mass unemployment. That is what they keep telling us to prepare for. As for believing them or not, we just don't know, right? You have to take it as at least a possibility, but we're not there yet, either. A lot of it is also the choice of organizational leaders who get to decide how these systems are actually used, and organizational change is slower than all the labs and tech people think. A lot of the time, technology creates new jobs. That's possible, too. We just don't know the answer.

As AI usage becomes more prevalent, what skills will we need to develop in the workforce?

If you asked about AI skills a year ago, I would have said prompting skills. That doesn't matter as much anymore. We've been doing a lot of research, and it turns out that the prompts just don't matter the way they used to. So, you know, what does that leave us with? Well, judgment, taste, deep experience and knowledge. But you have to build those in some ways despite AI, rather than with their help. Having curiosity and agency also helps, but these are not really skills. I don't think using AI is going to be the hard thing for most people.

What is the "hard thing," then?

I think it's developing enough expertise to be able to oversee these systems. Expertise is gained by apprenticeship, which means doing some AI-level work [tasks that current AI models can do easily] over and over again, so you learn how to do something right. Why would anyone ever do that again? And that becomes a real challenge. We have to figure out how to solve that with a mix of education and training.

How do you think AI will affect the entry-level job market?

I think people are jumping to the conclusion that [AI is] why we're seeing youth unemployment. I don't think that's the issue yet, but I think that's a huge concern. Companies are going to have to view entry-level jobs in some ways, not just as getting work done, but as a chance to get people who will become senior employees, and train them up to be that way, which is very different than how they viewed the work before.

Are your students concerned about AI's impact on jobs?

I think everybody's worrying about it, right? Consulting and banking, analyst roles and marketing roles — those are all jobs touched by AI. The more educated you are, the more highly paid you are, the more your job overlaps with AI. So I think everyone's very concerned and I don't have easy answers for them. The advice I tend to give people is to pick jobs that have as many 'bundled' tasks as possible. Think about doctors. You have a job where someone's supposed to be good at empathy and [surgical] hand skills and diagnosis and be able to run an office and keep up with the latest side of research. If AI helps you with some of those things, that's not a disaster. If AI can do one or two of those things better than you, that doesn't destroy your job, it changes what you do, and hopefully it lets you focus on the things you like best. So bundled jobs are more likely to be flexible than single thread jobs.

How might AI adoption play out in the workplace?

For me, the issue is that these tools are not really built as productivity tools. They're built as chatbots, so they work really well at the individual level, but that doesn't translate into something that can be stamped out across the entire team very easily. People are still figuring out how to operate with these things as teams. Do you bring it into every meeting and ask the AI questions in the middle of each meeting? Does everybody have their own AI campaign they're talking to? The piece I keep making a big deal about is that it is unfair to ask employees to figure it out. I'm seeing leadership and organizations say it's urgent to use AI, people will be fired without it, and then they have no articulation about what the future looks like. I want to hammer that point home, which is, without articulating a vision, where do we go? And that's the missing piece. It's not just up to everybody to figure it out. Instructors and college professors need to take an active role in shaping how AI is used. Leaders of organizations need to take an active role in shaping how AI is used. It can't just be, 'everyone figure it out and magic will happen.'


Daily Maverick
01-05-2025
- Daily Maverick
SA's tertiary institutions have to adapt and embrace AI — it's not going away
On 5 April, Daily Maverick published an investigative article on artificial intelligence (AI) use in higher education, coining the eye-catching term 'CheatGPT' and placing the burden of ethical AI use solely on students, rather than examining the institutional readiness to guide it. By framing students as cheating villains and educators as helpless victims, the article misses the real story: why has AI caught some of South Africa's top-ranked traditional institutions off guard? Students are using AI because the world is using AI. And in a world increasingly defined by AI fluency, our universities should be leading the way.

Integrating emerging AI tools into education is hardly radical. Unesco's 2023 Guidance for Generative AI in Education and Research urges universities to teach students responsible and ethical AI use. Similarly, the OECD's AI and the Future of Skills calls for digital fluency, critical thinking and adaptability. These are common-sense global expectations for thriving in an AI-driven world.

The realities of being human in an AI world

AI can compute, analyse data and generate answers with stunning speed. But people ask the deeper questions. We grasp nuance, sense context and recognise when something feels off. We pause, reflect and choose. Our value in the AI era lies in amplifying what makes us distinctly human: curiosity, creativity, empathy, judgment and responsibility. We are part of a long continuum — from stone tools to smart algorithms. Each tool changes how we live and think, but the principle remains: people shape tools, not the other way around.

Being human today means learning to work with AI without outsourcing our thinking or compromising our integrity. It's no longer about knowing everything, but about knowing how to learn, ask discerning questions and challenge AI with insight. This calls for a new literacy — not just technical, but ethical and social. We must understand AI's capabilities, its limitations and its implications. In Co-Intelligence: Living and Working with AI, Ethan Mollick writes that the future is not about humans versus AI, but humans and AI together. AI is fast and scalable. We are moral, imaginative and adaptable. The real opportunity lies in deliberate collaboration.

SA higher education should lead, not fear

A swirl of fears surrounds AI in education. Students may worry about unfair advantages through AI misuse. Educators and administrators may fear being displaced by technology. Institutions fear academic dishonesty, reputational risk and the escalating costs of digital transformation. Yet, allowing fear to dominate the conversation leads to reactive, backward-looking decisions. Like our global peers, South African universities must embrace their role as leaders in AI literacy. Counter-measures such as lockdown browsers, handwritten essays and timed invigilated tests are not marks of integrity — they are symptoms of institutional panic. These approaches reflect a legacy system reluctant to evolve.

There are more constructive paths forward. Staged, process-based assessments can trace how student ideas develop. Reflective tasks can require students to explain their thinking and how they used AI. Oral defences and collaborative projects make authorship and understanding transparent. Real-world briefs can treat AI as a tool — as it is in the workplace. Such strategies don't just deter misuse. They develop better thinkers. AI use in student work is not a crisis demanding a retreat to outdated testing regimes. It is a powerful catalyst for renewing long-questioned assessment systems — systems often divorced from meaningful learning. Rising to this challenge could finally deliver long-overdue reforms in how universities measure learning and competence.

At the South African College of Applied Psychology, we don't believe students need more surveillance. They need better guidance. We don't believe educators need a crisis. They need AI literacy strategies. South African universities and colleges have a clear mission: to prepare young people not just for the world of work, but for meaningful lives in an AI era. AI is not going away. No ban in a lecture hall will change that. Let's be highly effective at teaching and guiding our students, faculties and administrators to understand and use AI well. Anything less is negligence. DM