Latest news with #EthanMollick

The Australian
a day ago
- Business
Rise of the digital workforce: rethinking work in the age of agentic AI
What is perceived as science fiction today becomes mainstream tomorrow – and transformative the day after. Such is the progression of generative AI, and now, agentic AI. We may not have all the answers yet, but the questions are becoming clearer, and the organisations that ask them early – and act boldly – will shape the future of work for the better. As Professor Ethan Mollick said recently, 'The time to begin isn't when everything becomes clear – it's now, while everything is still messy and uncertain. The advantage goes to those willing to learn fastest.'

The pace at which AI is evolving is staggering. Agentic AI – autonomous systems capable of reasoning, learning, and acting independently – is no longer a theoretical concept. Agents are already executing human tasks, orchestrating workflows, and adapting through interactions with both humans and other agents. And it's only getting faster, as enterprise software players, hyperscalers, platform providers, frontier labs and new agentic product start-ups innovate and release capabilities into the market at a blistering pace. The short of it is that we've well and truly entered a new era of transformation – and what we're witnessing is the rise of a digital workforce.

To harness its full potential, we must move beyond outdated paradigms – especially the one-to-one thinking that equates digital labour with human labour in direct substitution. Human capacity is finite. Digital labour is not: it is a limitless, scalable, always-on capacity that can multiply effort, insight, and creativity at a scale and speed that were not previously possible. When we break this outdated paradigm and rethink how we work, the opportunities look very different.

Stu Scotis, National GenAI Lead at Deloitte Australia

Picture a marketing team empowered by AI agents capable of simulating hundreds of thousands of campaigns, then surfacing the top-performing strategies for a human to evaluate. Or a sales force supported by thousands of virtual assistants, each tailoring offers to individual customer profiles based on real-time analysis of preferences, history, and behaviour. Or a finance team where the CFO has thousands of digital finance analysts. These examples are just a starting point – and exciting as they are, even they are constrained by today's thinking about structure and work.

We're not just talking about automation for productivity – this is a reinvention of how we work. It demands a wholesale redesign of how we think about workflows, roles, and even how value is created. This is happening now: if you're following this space closely, you'll have seen headlines about high-profile CEOs setting directives on employees' AI usage under AI-first strategies. We're also seeing bolder moves, with some organisations merging HR and IT departments as the line between managing technology and managing people becomes increasingly blurred by agents. These organisations are going beyond surface-level integration – not just bolting AI onto existing systems, but reimagining those systems entirely. They are restructuring and redesigning core functions such as customer service, product development, HR, and operations to take full advantage of AI's capabilities.

Looking ahead, leadership roles also need to be redesigned for the digital workforce. To date, leadership has been built around managing people; now we need managers who orchestrate fleets of AI agents as well as human teams.
Setting clear expectations, evaluating outputs, and defining what 'good' looks like are quickly becoming core competencies for leaders as they take on accountability for transforming their organisations with AI.

Another essential question for every organisation is this: how far will you allow automation to proliferate? The capability is here – but are your systems, culture and people prepared? Agentic AI can perform complex tasks end-to-end, but without clear governance and ethical guidelines it can introduce real risk. The path forward involves deliberate decisions about where to retain human oversight, where to build in safeguards, and how to ensure transparency in automated processes.

What the end state looks like when functions, organisations or even sectors are redesigned around AI is not yet clear. But waiting isn't an option. Those who progress the fastest will gain a significant, if not insurmountable, competitive advantage. We might not want to be in a race with AI – but we are. It's a global race, and the stakes are high. Productivity, competitiveness and economic growth are all on the line. And as the pace of technological change accelerates, so must our ability to act with clarity and intent. The race leaders will be those who are already laying the groundwork to rebuild, rethink and reinvent around AI.

We've got a lot more to say about how organisations should be planning to shape the future of work, with a sustained focus on delivering scale and value. Watch this space!

Stu Scotis is National GenAI Lead at Deloitte Australia.


Axios
6 days ago
- Business
Secret chatbot use causes workplace rifts
More employees are using generative AI at work – and many are keeping it a secret.

Why it matters: Absent clear policies, workers are taking an "ask forgiveness, not permission" approach to chatbots, risking workplace friction and costly mistakes.

The big picture: Secret genAI use proliferates when companies lack clear guidelines, when favorite tools are banned, or when employees want a competitive edge over coworkers. Fear plays a big part too — fear of being judged, and fear that using the tool will make it look like the worker can be replaced by it.

By the numbers: 42% of office workers use genAI tools like ChatGPT at work, and one in three of those workers say they keep the use secret, according to research out this month from security software company Ivanti. A McKinsey report from January showed that employees are using genAI for significantly more of their work than their leaders think they are. 20% of employees report secretly using AI during job interviews, according to a Blind survey of 3,617 U.S. professionals.

Catch up quick: When ChatGPT first wowed workers over two years ago, companies were unprepared and worried about confidential business information leaking into the tool, so they preached genAI abstinence. Now the big AI firms offer enterprise products that can protect IP, and leaders are paying for those bespoke tools and pushing hard for their employees to use them. The blanket bans are gone, but the stigma remains.

Zoom in: New research backs up workers' fear of the optics around using AI for work. A recent study from Duke University found that those who use genAI "face negative judgments about their competence and motivation from others."

Yes, but: The Duke study also found that workers who use AI more frequently are less likely to perceive potential job candidates as lazy if they use AI.

Zoom out: The stigma around genAI can lead to a raft of problems, including the use of unauthorized tools, known as "shadow AI" or BYOAI (bring your own AI). Research from cyber firm Prompt Security found that 65% of employees using ChatGPT rely on its free tier, where data can be used to train models. Shadow AI can also hinder collaboration. Wharton professor and AI expert Ethan Mollick calls workers who use genAI for individual productivity "secret cyborgs" who keep all their tricks to themselves. "The real risk isn't that people are using AI — it's pretending they're not," Amit Bendov, co-founder and CEO of Gong, an AI platform that analyzes customer interactions, told Axios in an email.

Between the lines: Employees will use AI regardless of whether there's a policy, says Coursera's chief learning officer, Trena Minudri. Leaders should focus on training, she argues. (Coursera sells training courses to businesses.) Workers also need a "space to experiment safely," Minudri told Axios in an email. The tech is changing so fast that leaders need to acknowledge that workplace guidelines are fluid. Vague platitudes like "always keep a human in the loop" aren't useful if workers don't understand what the loop is or where they fit into it. GenAI continues to struggle with accuracy, and companies risk embarrassing gaffes, or worse, when unchecked AI-generated content goes public. Clearly communicating these issues can go a long way toward helping employees feel comfortable opening up about their AI use, Atlassian CTO Rajeev Rajan told Axios. "Our research tells us that leadership plays a big role in setting the tone for creating a culture that fosters AI experimentation," Rajan said in an email. "Be honest about the gaps that still exist."

The bottom line: Encouraging workers to use AI collaboratively could go a long way to ending the secrecy.


Forbes
24-05-2025
- Business
From AI Productivity To Organizational Power: An AI Era Blueprint
Using a Large Language Model Is More Like Playing This Organ Than a Spreadsheet (pictured: the Wanamaker organ)

I interact with many organizations, and five consistent questions arise. This is the first of a three-part series that will attempt to answer those questions for business leaders. The advice will be humble: things are moving so quickly that executives must act, but also remain curious and flexible.

Globally, billions of dollars are flowing into AI. Every nation state sees that AI is central to national defense, and machines learn faster than humans. This flurry of factors means innovation in AI, and in the symbiosis of people and machines, is going to accelerate at an accelerating rate. Generative AI has rapidly moved from novelty to necessity, with over one third of workers willing to pay for the tools themselves. In a recent blog post, Ethan Mollick noted that even though the research shows individual-level productivity gains of 10-40% or more, firms realize much lower gains at the organizational level. This gap isn't merely about technology readiness. It's about leadership, learning, and strategic integration. In the rush to adopt AI, many organizations are missing the deeper transformation required to turn individual speed into institutional acceleration. To bridge this divide, leaders need a new playbook for preparedness—centered on four critical imperatives: Content, Capability, Community, and Curation.

Content Changes Minds: Leaders Need New Content Sources

In the generative AI age, leaders can no longer rely on their traditional content diet. AI changes too fast. The best ideas now emerge not just from Harvard Business Review or McKinsey reports but from research labs, GitHub repositories, YouTube, X, LinkedIn, expert newsletters, research papers, product announcements and many other sources. There is robust scientific evidence that people with more expertise in an area perceive the world in a new way. My former colleague Prof. Jim Cash used to call this more educated perspective 'visual literacy'. Leaders who are not schooled in what is happening in AI simply can't see the opportunities and possibilities. Therefore, if you want your organization to understand the potential of AI, you need to read and consume new information and inputs. The goal isn't technical mastery—it's strategic literacy. AI-literate leaders ask better questions. They spot emerging opportunities faster. And crucially, they model curiosity for the rest of the organization. You need to change the content you consume before you can change the context in which you lead.

Leaders Build New Skills to Win a New Race: Knowledge and Know-How Are Critical for the Entire Leadership Team

Working with a large language model is not like implementing a technology; it's more like playing an enormously complex instrument. If you walked up to the Midmer-Losh organ at Boardwalk Hall in Atlantic City, New Jersey – which has 33,112 pipes, 7 keyboards, hundreds of stops, and over a dozen foot pedals – and played chopsticks on the middle keyboard, it wouldn't be that impressive. The reason it's not impressive is your lack of playing ability, not the quality of the instrument. The only solution is hands-on experience for everyone in the firm – especially the senior management team and the board. Capability-building must be enterprise-wide. Some firms are launching 'AI centers of excellence.' Others pair junior 'AI natives' with senior managers to cross-train. All should ensure that learning is active, continuous, and contextual—focused not just on the tool, but also on its use in the firm. As generative AI reshapes job roles, the capability gap becomes a bottleneck. The future belongs to AI-complemented workers, not just AI-aware ones.

AI Communities Amplify Learning: Leaders Learn From Leaders

In fast-moving domains like AI, no single organization can learn fast enough alone. That's why executives must foster internal and external learning communities—spaces where employees, peers, and partners can share use cases, prompts, failures, and insights. Internally, this might mean AI guilds, cross-functional prompt libraries, or weekly 'show-and-tell' sessions. Externally, it means plugging into peer forums, academic collaborations, conferences and vendor ecosystems. Crucially, these communities should include both technical and non-technical voices. The best generative AI ideas often come from unexpected places—an HR manager using GPT to rewrite job descriptions, or a field technician using vision models to pre-screen repairs. Communities not only accelerate learning—they amplify culture. They signal that AI is not a top-down mandate, but a shared frontier.

You Can't Just Wait for Big Suppliers to Move You Fast Enough: The World Is Abuzz With Innovation, So Curate a Wider Network of Supply

Most organizations are still using generative AI tools as consumer products—ChatGPT for writing emails, Midjourney for visuals. But the future of enterprise AI lies in curated stacks of specialized tools, integrated with workflows, data, and business logic. Few organizations are happy simply waiting for their main vendor, whether it is Microsoft, Google, Salesforce, or another, to serve all their AI needs. The innovation landscape is just too varied and wide. Executives must curate their own ecosystem of AI suppliers strategically. That means identifying best-in-class tools (from copilots to agents to orchestration layers), vetting vendors for security and compliance, and creating internal 'AI marketplaces' for easy access. Equally important is curating the right partnerships. Collaborate with startups, integrate open-source models, engage with AI research hubs. The supply chain of intelligence is changing—don't wait for the big consultancies to catch up. You will need a group of mainstream big suppliers as well as a network of small innovators. Remember: every supplier choice you make either accelerates or inhibits transformation.

The first productivity revolution is personal. The second must be organizational. The AI era won't reward passive adopters. Organizations that treat generative AI as a set of apps will see temporary productivity bumps—followed by competitive obsolescence. Those that lead strategically, through the 4 Cs of AI Preparedness (Content, Capability, Community, and Curation), will not only boost productivity but reinvent how they operate. Leaders who embrace this moment with intentionality will shape the future of their organizations—not just survive it.


Daily Maverick
01-05-2025
SA's tertiary institutions have to adapt and embrace AI — it's not going away
On 5 April, Daily Maverick published an investigative article on artificial intelligence (AI) use in higher education, coining the eye-catching term 'CheatGPT' and placing the burden of ethical AI use solely on students, rather than examining the institutional readiness to guide it. By framing students as cheating villains and educators as helpless victims, the article misses the real story: why has AI caught some of South Africa's top-ranked traditional institutions off guard? Students are using AI because the world is using AI. And in a world increasingly defined by AI fluency, our universities should be leading the way.

Integrating emerging AI tools into education is hardly radical. Unesco's 2023 Guidance for Generative AI in Education and Research urges universities to teach students responsible and ethical AI use. Similarly, the OECD's AI and the Future of Skills calls for digital fluency, critical thinking and adaptability. These are common-sense global expectations for thriving in an AI-driven world.

The realities of being human in an AI world

AI can compute, analyse data and generate answers with stunning speed. But people ask the deeper questions. We grasp nuance, sense context and recognise when something feels off. We pause, reflect and choose. Our value in the AI era lies in amplifying what makes us distinctly human: curiosity, creativity, empathy, judgment and responsibility. We are part of a long continuum — from stone tools to smart algorithms. Each tool changes how we live and think, but the principle remains: people shape tools, not the other way around. Being human today means learning to work with AI without outsourcing our thinking or compromising our integrity. It's no longer about knowing everything, but about knowing how to learn, ask discerning questions and challenge AI with insight. This calls for a new literacy — not just technical, but ethical and social. We must understand AI's capabilities, its limitations and its implications. In Co-Intelligence: Living and Working with AI, Ethan Mollick writes that the future is not about humans versus AI, but humans and AI together. AI is fast and scalable. We are moral, imaginative and adaptable. The real opportunity lies in deliberate collaboration.

SA higher education should lead, not fear

A swirl of fears surrounds AI in education. Students may worry about unfair advantages through AI misuse. Educators and administrators may fear being displaced by technology. Institutions fear academic dishonesty, reputational risk and the escalating costs of digital transformation. Yet allowing fear to dominate the conversation leads to reactive, backward-looking decisions. Like our global peers, South African universities must embrace their role as leaders in AI literacy. Counter-measures such as lockdown browsers, handwritten essays and timed invigilated tests are not marks of integrity — they are symptoms of institutional panic. These approaches reflect a legacy system reluctant to evolve.

There are more constructive paths forward. Staged, process-based assessments can trace how student ideas develop. Reflective tasks can require students to explain their thinking and how they used AI. Oral defences and collaborative projects make authorship and understanding transparent. Real-world briefs can treat AI as a tool — as it is in the workplace. Such strategies don't just deter misuse. They develop better thinkers. AI use in student work is not a crisis demanding a retreat to outdated testing regimes.
It is a powerful catalyst for renewing long-questioned assessment systems — systems often divorced from meaningful learning. Rising to this challenge could finally deliver long-overdue reforms in how universities measure learning and competence. At the South African College of Applied Psychology, we don't believe students need more surveillance. They need better guidance. We don't believe educators need a crisis. They need AI literacy strategies. South African universities and colleges have a clear mission: to prepare young people not just for the world of work, but for meaningful lives in an AI era. AI is not going away. No ban in a lecture hall will change that. Let's be highly effective at teaching and guiding our students, faculties and administrators to understand and use AI well. Anything less is negligence. DM


Forbes
08-04-2025
- Entertainment
Mollick Presents The Meaning Of New Image Generation Models
What does it mean when AI can build smarter pictures? We found out a few weeks ago as both Google and OpenAI unveiled new image generation models that are fundamentally different from what has come before. A number of important voices chimed in on how this is likely to work, but I hadn't yet covered this timely piece by Ethan Mollick at One Useful Thing, in which the MIT graduate looks at these new models in detail and evaluates how they work and what they're likely to mean for human users.

The Promise of Multimodal Image Generation

Essentially, Mollick explains that traditional image generation systems were a handoff from one model to another. 'Previously, when a Large Language Model AI generated an image, it wasn't really the LLM doing the work,' he writes. 'Instead, the AI would send a text prompt to a separate image generation tool and show you what came back. The AI creates the text prompt, but another, less intelligent system creates the image.'

Diffusion Models Are So 2021

The old models also mostly used diffusion. How does diffusion work? The traditional models have a single dimension that they use to generate images. I remember, a year ago, writing up for a general audience an explanation of diffusion by my colleague Daniela Rus, who presented it at conferences. It goes something like this: the diffusion model takes an image, introduces noise, and abstracts the image, before denoising it again to form a brand-new image that resembles what the computer already knows from looking at images that match the prompt.

Here's the thing – if that's all the model does, you're not going to get an informed picture. You're going to get a new picture that looks like a prior picture, or more accurately, like the thousands of pictures the computer saw on the internet, but you're not going to get a picture with actionable information that's reasoned and considered by the model itself.
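To make that noising-and-denoising loop concrete, here is a minimal sketch in Python of a standard DDPM-style diffusion loop (a technique the article describes but doesn't name). This is my illustration, not code from Mollick's piece: the noise schedule, the toy 8x8 'image', and the placeholder predict_noise function are all assumptions, and in a real generator predict_noise is a large trained neural network conditioned on the text prompt.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000                             # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)   # noise schedule (assumed, typical DDPM values)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative signal retention per step

def add_noise(x0, t):
    """Forward process: jump straight to step t by mixing the image with noise."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def predict_noise(xt, t):
    """Stand-in for the trained denoiser. A real model estimates the noise in
    xt using what it learned from training images that match the prompt; here
    we return zeros just so the loop runs end to end."""
    return np.zeros_like(xt)

def denoise(xT):
    """Reverse process: walk from near-pure noise back toward an image."""
    x = xT
    for t in reversed(range(T)):
        eps_hat = predict_noise(x, t)
        # Standard DDPM mean update: strip out the estimated noise for step t.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            # Re-inject a little noise on every step except the last.
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

x0 = rng.standard_normal((8, 8))     # toy 8x8 "image"
xT, _ = add_noise(x0, T - 1)         # after the final step it is almost pure noise
sample = denoise(xT)                 # a brand-new "image" recovered from noise
print(sample.shape)                  # (8, 8)
```

The key point for Mollick's argument sits in predict_noise: everything the sample looks like comes from what that network absorbed during training, which is why a pure diffusion pipeline imitates prior pictures rather than reasoning about the prompt.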
Now we have multimodal control, and that's fundamentally different.

No Elephants?

Mollick gives the example of a prompt that asks the model to create an image of a room without elephants, annotated to show why there are no elephants in it. Here's the prompt: 'show me a room with no elephants in it, make sure to annotate the image to show me why there are no possible elephants.' When you hand this to a traditional model, it shows you some elephants, because it doesn't understand the context of the prompt or what it means. Furthermore, a lot of the text you get back is complete nonsense, or even made-up characters. That's because the model didn't know what letters actually looked like – it was getting that from training data, too. Mollick then shows what happens when you hand the same prompt to a multimodal model: it gives you exactly what you want – a room with no elephants, and notes like 'the door is too small' explaining why no elephant could be in there.

Challenges of Prompting Traditional Models

I know personally that this was how the traditional models worked. As soon as you asked them not to put something in, they would put it in, because they didn't understand your request. Another major difference is that traditional models would change the fundamental image every time you asked for a correction or a tweak. Suppose you had an image of a person, and you asked for a different hat. You might get an image of an entirely different person. Multimodal image generation models know how to preserve the result you wanted and change it in just one small way.

Preserving Habitats

Mollick gives another example of how this works: he shows an otter with a particular sort of display in its hands, and then the same otter appearing in different environments with different styles of background. This, too, shows the detailed integration of multimodal image generators.

A Whole Pitch Deck

For a use-case scenario, Mollick shows how you could take one of these multimodal models and have it design an entire pitch deck for guacamole or anything else. All you have to do is ask for this type of deck, and the model will get right to work looking at what else is on the internet, synthesizing it, and giving you the result. As Mollick mentions, this will make all sorts of human work obsolete very quickly. We will need well-considered frameworks.