
Latest news with #McKinsey

As McKinsey maps the Agentic AI future, one ASX name is already in the mix

News.com.au

3 hours ago



  • AI agents are now built for complexity, not novelty
  • Decidr lands in the sweet spot of automation
  • McKinsey's trillion-dollar prediction is already underway

Something big is stirring in the world of AI, and consultants at world-renowned McKinsey have given it a name. They're calling this moment the 'Dawn of Agentic AI', a shift from machines that think to machines that do. They're talking about autonomous digital workers that plan, act and learn, leaving the old guard of rule-based systems in the dust. McKinsey says these systems, built atop foundation models, can break down multi-step tasks, talk to humans and actually deliver real outcomes. It's the leap from 'tell me more' to 'get it done'.

McKinsey says AI agents will work with people, taking on the grunt work so teams can focus on what actually needs human brainpower. They call it 'human in the loop'. The idea is to automate complexity, not creativity. It's not about replacing people, but supporting them, especially in jobs that involve high-volume, high-friction tasks that slow teams down.

This is how you unlock real AI value, says McKinsey

McKinsey also reckons most companies are going about it wrong. The temptation is to bolt AI agents onto existing systems and treat them like smart plugins. But that just creates more mess. What actually works, they say, is to redesign the entire workflow around the AI agent (instead of the people): companies should rethink how the job gets done from start to finish. That's when the big value shows up.

How big could Agentic AI be?

Try between US$2.6 trillion and US$4.4 trillion in annual global economic value, McKinsey said. Not from one sector, but across the board: sales, customer service, software, finance, R&D and HR. The number sounds wild, but the logic checks out: if you can automate high-effort processes that humans have been slogging through for decades, and do it reliably, at scale, it changes everything.

McKinsey also pointed out that businesses chasing that value need to get three things right. First, agents need to work across systems: no more data trapped in silos. Second, they need proper governance and monitoring so they don't go rogue. And third, they need to be usable by non-tech teams. McKinsey's final message is this: the companies that win in this next wave won't be the ones with the flashiest demos. They will be the ones that actually redesign how work happens, build AI into the fabric of their org and scale it with discipline.

Betting on the future of work

Decidr Ai Industries (ASX:DAI) may be the only ASX-listed company that's building exactly what McKinsey describes. The company is creating teams of smart agents designed to handle real work, like chatting with potential customers, managing the hiring process or crafting marketing content.

Imagine this: you say, 'Plan a marketing campaign that targets small businesses in Queensland next quarter.' An agentic AI won't just hand you a slide deck. It will dig into your database and come up with ideas. It will also run A/B tests, adjust the pitch and schedule social posts. Over in marketing, a content agent crafts blog posts, writes ads and optimises for SEO. Meanwhile, a recruitment agent hunts candidates, schedules interviews and manages the screening process. All of these agents are launched through Decidr's Onboarding Studio, so businesses can get full AI teams up and running in minutes, no coding needed.
Even better, these agents plug right into the systems companies already use – email, finance tools, HR software – working together without manual handoffs or data silos.

On a mission

Decidr is also positioning itself right where McKinsey says the biggest shift is about to happen. Industries that are drowning in complexity – like recruitment, onboarding, sales and content – are crying out for smarter automation. Its platform has landed right in that sweet spot, and since rebranding from Live Verdure in March, Decidr has been busy: teaming up with AWS Venture Studio for cloud grunt and LLM access, cutting deals with Sugarwork in the US, and automating onboarding for 1500 SME listings via SBX back home. It has also joined the Tech Council of Australia, a flag in the sand for AI governance. Meanwhile, CareerOne's revenue doubled in June using Decidr's agents, with job match performance up 8x. And Growth Faculty is already earning subscriptions from Decidr-powered AI Mentors.

Bringing the future forward

There are still hurdles ahead for Decidr, of course. Expanding into the US will test trust and traction. Regulatory frameworks are evolving, and public markets haven't exactly been kind to early-stage tech. And McKinsey has been blunt: the real value in agentic AI won't come from novelty or hype, but from rethinking how work actually gets done. Decidr, though, isn't pitching some distant future. It's already rolling out the kind of agentic systems that industries are scrambling to adopt right now.
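For readers wondering what the plan-act-learn loop described above actually looks like in software, here is a minimal, hypothetical Python sketch of the pattern. The `call_llm` stub and the tool names are assumptions for illustration only; this is not Decidr's platform or McKinsey's blueprint.

```python
# A minimal plan-act-observe loop, the control pattern behind "agentic" AI.
# Everything here is a hypothetical stub for illustration.

def call_llm(prompt: str) -> str:
    """Stand-in for a foundation-model call; a real agent would query an LLM."""
    return "done:"  # canned reply so the sketch runs without a model

TOOLS = {
    "query_crm": lambda arg: f"(rows matching {arg!r})",        # e.g. customer lookup
    "schedule_post": lambda arg: f"(post scheduled: {arg!r})",  # e.g. social scheduler
}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        # Plan: ask the model for the next action, given the goal and history.
        decision = call_llm(f"Goal: {goal}\nSo far: {history}\nReply tool:arg or done:")
        tool_name, _, arg = decision.partition(":")
        if tool_name == "done":  # the model judges the task complete
            break
        # Act: execute the chosen tool against a real system.
        result = TOOLS.get(tool_name, lambda a: "unknown tool")(arg)
        # Observe/learn: feed the outcome back into the next planning step.
        history.append(f"{tool_name}({arg}) -> {result}")
    return history

print(run_agent("Plan a marketing campaign for Queensland small businesses"))
```

The detail that makes this "agentic" rather than a one-shot chatbot is the feedback line: each tool result re-enters the planning prompt, so the system can adjust its next step based on what actually happened.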

Compute Labs Offers Investor Opportunities In AI Data Center Buildout

Forbes

5 hours ago



I recently talked with a company that is giving investors a way to participate in the AI economy, particularly in the wealth generated by the build-out of AI data centers.

Ian Buck, VP and General Manager of Hyperscale and HPC at Nvidia, said that 'every dollar a cloud provider spends on buying a GPU, they make it back at $5 over four years.' He also said that inference is even more profitable, with $7 back for every dollar spent. Hence this could be a significant investment opportunity.

This is reinforced by a recent 2025 McKinsey report, which says that by 2030 data centers are projected to require $6.7 trillion worldwide to keep pace with the demand for compute power. Data centers equipped to handle AI processing loads are projected to require $5.2 trillion in capital expenditures, while those powering traditional IT applications are projected to require $1.5 trillion. Overall, that's nearly $7 trillion in capital outlays needed by 2030.

Nvidia says that it is helping build 'AI factories' around the globe—data centers purpose-built for training and deploying the world's most advanced AI models. But while Nvidia is selling the picks and shovels of the AI gold rush, investors have limited ways to participate directly in the infrastructure powering it without buying Nvidia stock or funding billion-dollar data centers.

Compute Labs says it is offering to transform enterprise GPUs into yield-bearing digital assets. The company gives public investors a way to earn income from real infrastructure—the same kind of infrastructure Nvidia calls the backbone of the future. The company says that its first $1M GPU vault is backed by NVIDIA H200s, running live AI workloads from vetted enterprise clients. Investors receive yields from these investments, while Compute Labs handles deployment, management and compliance—offering transparency, principal protections and operational oversight.

I spoke with Nikolay Filichkin and Albert Zhang, two of the founders of Compute Labs, about what they are doing and, more broadly, how this could change the way smaller organizations and investors get involved in the AI boom. They told me that they had talked with data center operators about ways they might help them raise capital to invest in AI-related equipment like GPUs or memory. This is done as an investment opportunity that turns devices such as GPUs into tradable digital assets. They have been doing this since last March, and with the current high demand for AI they have seen yields of 20-50% or higher on investments. They said that they are confident this high demand will continue for another 2-3 years. The image below shows the investor syndication structure they are using for these data center investments.

[Image: Compute Labs Investment Syndication Strategy]

In addition to Nvidia GPUs, they are also working with AMD on investments into its GPUs, and with other application-specific data center chips for AI applications. They are looking to expand their model into quantum computing resources as well, basically into any data center asset that can yield a good return. They are also looking to tokenize everything that makes a data center go, including energy. They said that this is a new way to fund data center growth and may be of particular interest to smaller and less capitalized data centers.
It seems to me that they are essentially turning data center capital investments into operating expenses for the data centers, with regular returns to the investors. The company says that by aggregating both hardware and capital it can get discounts on the hardware and lower interest on the money used to purchase the capital assets. They also said that at the end of life for AI hardware assets, investors can continue to earn returns on the secondary hardware market, or they can trade for another investment using their existing assets. Compute Labs has opened the door for investors to participate in the build-out of AI data centers, with a promise of significant returns and a way for data center operators to turn major capital investments into operating expenses.
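As a back-of-the-envelope check on the numbers quoted above, here is a short Python sketch working through Buck's $5-per-dollar claim, McKinsey's capex split, and the yield range the founders cited. The compounding assumption is mine, for illustration only; nothing here is investment guidance.

```python
# Back-of-the-envelope math for the figures quoted in the article.
# The compounding assumption is illustrative; this is not investment guidance.

# Nvidia's claim: $1 of GPU capex returns $5 over four years.
gross_multiple, years = 5.0, 4
annualized = gross_multiple ** (1 / years) - 1  # if treated as compounded growth
print(f"Implied annualized return: {annualized:.1%}")  # ~49.5% per year

# McKinsey's 2030 projection: AI-capable plus traditional data center capex.
ai_capex_tn, traditional_capex_tn = 5.2, 1.5
print(f"Total 2030 capex: ${ai_capex_tn + traditional_capex_tn:.1f} trillion")  # $6.7T

# A hypothetical $1M GPU vault at the 20-50% yields Compute Labs cited.
principal = 1_000_000
for y in (0.20, 0.50):
    print(f"At {y:.0%} yield: ${principal * y:,.0f} per year")
```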

Are Agentic AI Systems Quietly Taking Over Enterprises? 3 Ways To Keep Humans In The Loop

Forbes

19 hours ago



Imagine a future where AI agents run the majority of your company's daily operations by handling complex tasks, managing workflows, and resolving customer issues around the clock, all while reporting to another AI agent manager who then reports to you. Picture reaching out to McKinsey and, instead of a human consultant, being connected with a customized AI agent that provides expert insights instantly. That future is nearly here.

Agentic AI is rapidly reshaping how enterprises operate. At Salesforce, AI agents now manage 30 to 50 percent of internal workflows, and more than 85 percent of customer service inquiries are resolved by AI, dramatically easing the burden on human staff. CEO Marc Benioff, known for his bold branding, has even called himself the "Taylor Swift of Tech," comparing Salesforce's AI transformation to the sweeping impact of Swift's multi-era world tours.

Salesforce isn't alone. McKinsey & Company has introduced its own "Lilli" agents, AI tools capable of conducting deep research, generating data-driven insights, and producing presentation-ready charts and slides. As these systems evolve, they are poised to take over tasks traditionally assigned to junior consultants, potentially reshaping the firm's hiring needs and operational structure. The broader implication? We are moving toward a future where firms like McKinsey, BCG, Bain, or Deloitte might offer AI agents as the first point of contact—consultants that never sleep, scale instantly, and continually improve.

The rise of enterprise AI agents is no longer speculative; it's unfolding now, and fast. But how far will it go? Could AI agents eventually displace 80-90% of today's workforce within these firms? Will humans have a meaningful role in workflows as automation scales? These are not just hypothetical questions—they are strategic imperatives. As agentic AI begins to power everything from back-office functions to client-facing operations, the challenge is clear: how do we keep humans meaningfully in the loop? Here are three strategies to ensure that, even in an era of hyper-automation, the human touch remains essential to enterprise success.

1. Design Human-In-The-Loop (HITL) Agentic AI Systems With Unique Human Roles

As agentic AI systems increasingly take on core operational functions, enterprises must reimagine organizational roles and workflows to ensure continued and meaningful human involvement. Rather than assigning humans to tasks that AI can readily perform, the focus should shift toward areas where human expertise remains indispensable, such as strategic decision-making, ethical governance, nuanced client engagement, and cross-functional leadership.

To enable this transition, organizations must design and implement robust human-in-the-loop (HITL) frameworks. These systems embed human oversight into AI-driven processes, particularly in high-impact areas like talent acquisition, financial decision-making, legal analysis, and healthcare. For instance, in a consulting environment, an AI agent might generate an initial draft of a client strategy or market report, but it is the consultant's responsibility to interpret the findings, tailor the insights to the client's specific context, and ensure overall quality and relevance.

Supporting these evolving workflows is a new wave of hybrid roles such as AI strategy leads, human-AI collaboration specialists, and HITL analysts. These roles serve as essential interfaces between AI systems and business outcomes, safeguarding against errors while optimizing the value AI delivers. By embedding human judgment, accountability, and strategic alignment into AI-enabled operations, organizations can unlock the full promise of agentic AI while maintaining human agency at the core of enterprise decision-making.
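To make the HITL idea concrete, here is a minimal Python sketch of an approval gate, assuming a simple rule that routes high-impact domains to a human reviewer. The function names and the domain list are hypothetical illustrations, not any firm's actual workflow.

```python
# Minimal human-in-the-loop (HITL) gate: the AI drafts, a human approves
# anything high-impact before it ships. All names here are hypothetical.

HIGH_IMPACT_DOMAINS = {"talent", "finance", "legal", "healthcare"}

def ai_draft(task: str) -> str:
    # Stand-in for an agent producing a first draft (e.g. a client report).
    return f"[AI draft for: {task}]"

def human_review(draft: str) -> str:
    # In a real system this would land in a reviewer queue (a HITL analyst,
    # or the consultant who tailors the insight to the client's context).
    return draft + " (reviewed and approved by a human)"

def run_task(task: str, domain: str) -> str:
    draft = ai_draft(task)
    if domain in HIGH_IMPACT_DOMAINS:
        return human_review(draft)  # human judgment gates high-impact work
    return draft  # low-risk output can ship straight through

print(run_task("draft market-entry strategy", domain="finance"))
print(run_task("summarize meeting notes", domain="internal"))
```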
2. Build an AI-Ready Workforce for Human-AI Collaboration

As agentic AI becomes increasingly integrated into enterprise operations, it is essential to invest in upskilling the workforce in both AI literacy and systems thinking. Employees need a clear understanding of how AI systems function, where they create value, and what their limitations are. This knowledge allows them to interpret AI outputs thoughtfully, identify potential risks or biases, and collaborate with these systems effectively. When AI is approached as a collaborative partner rather than a mysterious or autonomous tool, organizations can foster greater adoption, trust, and alignment with business goals.

For example, in financial services, portfolio managers who are trained in AI concepts can use algorithmic tools to enhance investment strategies while still applying their own market expertise to final decisions. In marketing, teams can combine AI-powered customer segmentation with human creativity to develop more tailored and impactful campaigns. By cultivating these skills across functions, companies create a workforce that is not only technically capable but also strategically positioned to guide and govern the responsible use of AI throughout the organization.

3. Establish AI Governance and Escalation Frameworks to Ensure Accountability

As AI systems are increasingly deployed in critical business functions, it is essential to establish strong governance and escalation frameworks to maintain oversight and accountability. These protocols ensure that when AI-generated recommendations conflict with legal standards, ethical principles, or stakeholder expectations, human experts can intervene. For example, in financial services, if an AI system produces a credit decision that appears biased, compliance officers should have the authority to pause and review the process before action is taken (a minimal version of this routing rule is sketched at the end of this article).

To support this oversight, organizations should form dedicated structures such as AI ethics boards or enterprise-level agent councils. These groups evaluate high-impact use cases, assess risk, and define clear escalation paths for teams interacting with AI systems. By embedding governance into the AI lifecycle, enterprises can scale intelligent automation responsibly while preserving human judgment and organizational integrity.

Leading Through the Age of AI Agents

Agentic AI is no longer a vision of the future; it is an active force reshaping the enterprise landscape. As organizations embrace these powerful systems, the challenge is not simply technological but deeply human. Success will depend on how well companies design for collaboration between intelligent agents and the people who guide them. By embedding thoughtful human oversight, investing in AI literacy, and governing automation with intention, enterprises can unlock the full potential of agentic AI while ensuring that people remain at the heart of innovation and decision-making.
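As promised above, here is a hypothetical Python sketch of the escalation rule from point 3: a fairness monitor's score decides whether an AI credit decision executes automatically or pauses for compliance review. The threshold and field names are assumptions for illustration, not a real compliance system.

```python
# Sketch of an escalation rule: hold AI credit decisions that trip a bias
# check until a compliance officer reviews them. Threshold and fields are
# hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class CreditDecision:
    applicant_id: str
    approved: bool
    bias_score: float  # from a fairness monitor: 0.0 (clean) to 1.0 (flagged)

BIAS_THRESHOLD = 0.3  # assumed policy threshold for mandatory human review

def route(decision: CreditDecision) -> str:
    if decision.bias_score >= BIAS_THRESHOLD:
        # Escalation path: pause the decision and notify compliance.
        return f"HOLD {decision.applicant_id}: escalated for compliance review"
    return f"EXECUTE {decision.applicant_id}: within policy, proceeds automatically"

print(route(CreditDecision("A-1041", approved=False, bias_score=0.45)))
print(route(CreditDecision("A-1042", approved=True, bias_score=0.05)))
```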

An unlikely alliance: Drayton and Mackenzie, by Alexander Starritt, reviewed

Spectator

a day ago



Alexander Starritt has form with satire. His 2017 debut The Beast skewered the modern tabloid press, drawing comparisons with Evelyn Waugh's Scoop. For his third novel, Drayton and Mackenzie, he is back at it, mercilessly mocking everything from Oxbridge and management consultants to tech bros and new parents in a story that hinges on whether two unlikely friends can make a success of their tidal energy start-up. It's more fun than it sounds.

The narrative opens in the early 2000s with James Drayton – someone who gets his kicks by finishing his maths A-level exam in 20 minutes and who finds undergraduate life disappointingly basic. 'He supposed he'd been naive to think of university as concerned with intellect… At this level, Oxford was just an elementary course in information-processing, a training school for Britain's future lawyers, politicians and administrators,' writes Starritt, using the omniscient voice. Lest this seem too obnoxious, James is self-aware enough to realise that finishing his exams so quickly meant 'he would have to leave the exam room alone while the rest of his class stayed inside together'. One of Starritt's many skills is how he ratchets up the poignancy, creating real characters rather than caricatures.

The yang to Drayton's yin comes in the form of Roland Mackenzie, an Oxford slacker who scrapes a 2:2. They're at the same college but barely clock each other. Later, when James is the subject of articles and interviews, he will be asked if it's true that they were both in the same rowing boat. 'James didn't notice him at the time.' After Roland takes a gap year or two teaching in India, he somehow winds up at McKinsey, working alongside James. Roland finds it catastrophically boring. 'But even that he quite enjoyed, since the boringness was so authentic, like going to New York and it being just like the movies.'

As the duo strike out on their own, seeking to disrupt electricity generation with a scheme to turn tidal power into light, Starritt's granular detail over 500-odd pages skirted, at least initially, a similarly fine line between boring me and impressing me with its authenticity. There are even cameos from the central bankers Ben Bernanke and Mario Draghi as the world economy tanks, although the step-by-step exposition in these chapters is overkill. What with the emotion of the escalating bromance and making James someone 'who hasn't read a novel since university', Starritt goes all out to hook the same sort of elusive male reader who lapped up Andrew O'Hagan's tear-jerking Mayflies. And good luck to him. He certainly hooked me.

Only Critical Thinking Ensures AI Makes Us Smarter, Not Dumber

Forbes

a day ago



AI needs more human thinking, not less, but are we giving it the opposite?

We're entering a new era where artificial intelligence can generate content faster than we can apply critical thinking to it. In mere seconds, AI can summarize long reports, write emails in our tone and even generate strategic recommendations. But while these productivity gains are promising, there's an urgent question lurking beneath the surface: are we thinking less because AI is doing more? The very cognitive skills we need most in an AI-powered world are the ones these tools may be weakening.

When critical thinking takes a back seat, the consequences are almost comical—unless it's your company making headlines. Real-world breakdowns show what can happen when critical thinking is absent. As AI models get more advanced and powerful, they exhibit even higher rates of hallucinations, making human supervision even more critical. And yet, a March 2025 McKinsey study found that only 27% of organizations reported reviewing 100% of generative AI outputs. With so much of the focus on the technology itself, many organizations clearly don't yet understand the growing importance of human oversight.

Clarifying what critical thinking is

While most people agree that critical thinking is essential for evaluating AI, there's less agreement on what it actually means. The term is often used as a catch-all for a wide range of analytical skills—from reasoning and logic to questioning and problem-solving—which can feel fuzzy or ambiguous. At its core, critical thinking is both a mindset and a method. It's about questioning what we believe, examining how we think and applying tools such as evidence and logic to reach better conclusions. I define critical thinking as the ability to evaluate information in a thoughtful and disciplined manner to make sound judgments, instead of accepting things at face value.

As part of researching this article, I spoke with Fahed Bizzari, managing partner at Bellamy Alden AI Consulting, who helps organizations implement AI responsibly. He described the ideal mindset as "a permanent state of cautiousness" where "you have to perpetually be on your guard to take responsibility for its intelligence as well as your own." This mindset of constant vigilance is essential, but it needs practical tools to make it work in daily practice.

The GPS Effect: What happens when we stop thinking

This need for vigilance is more urgent than ever. A troubling pattern has emerged: researchers are finding that frequent AI use is linked to declining critical thinking skills. In a recent MIT study, 54 participants were assigned to write essays using one of three approaches: their own knowledge ("brain only"), Google Search, or ChatGPT. The group that used the AI tool showed the lowest brain engagement, weakest memory recall and least satisfaction with their writing. This cognitive offloading produced essays that were homogeneous and "soulless," lacking originality, depth and critical engagement. Ironically, the very skills needed to assess AI output—like reasoning, judgment, and skepticism—are the ones being eroded or suppressed by overreliance on the technology.

It's like your sense of direction slowly fading because you rely on GPS for every trip—even around your own neighborhood. When the GPS fails due to a system error or lost signal, you're left disoriented. The skill you once had has atrophied because you outsourced your navigation to the device.
Bizzari noted, "AI multiplies your applied intelligence exponentially, but in doing so, it chisels away at your foundational intelligence. Everyone is celebrating the productivity gains today, but it will eventually become a huge problem." His point underscores a deeper risk of overdependence on AI: we don't just make more mistakes—we lose our ability to catch them.

Why fast thinking isn't always smart thinking

We like to think we evaluate information rationally, but our brains aren't wired that way. As psychologist Daniel Kahneman explains, we tend to rely on System 1 thinking, which is fast, automatic and intuitive. It's efficient, but it comes with tradeoffs. We jump to conclusions and trust whatever sounds credible. We don't pause to dig deeper, which makes us especially susceptible to AI mistakes.

AI tools generate responses that are confident, polished and easy to accept. They give us what feels like a good answer—almost instantly and with minimal effort. Because it sounds authoritative, System 1 gives it a rubber stamp before we've even questioned it. That's where the danger lies. To catch AI's blind spots, exaggerations or outright hallucinations, we must override that System 1 mental reflex. That means activating System 2 thinking, the slower, more deliberate mode of reasoning. It's the part of us that checks sources, tests assumptions and evaluates logic. If System 1 is what trips us up with AI, System 2 is what safeguards us.

The Critical Five: A framework for turning passengers into pilots

You can't safely scale AI without scaling critical thinking. Bizzari cautioned that if we drop our guard, AI will become the pilot—not the co-pilot—and we become unwitting passengers. As organizations become increasingly AI-driven, they can't afford to have more passengers than pilots. Everyone tasked with using AI—from analysts to executives—needs to actively guide decisions in their domain. Fortunately, critical thinking can be learned, practiced and strengthened over time. But because our brains are wired for efficiency and favor fast, intuitive System 1 thinking, it's up to each of us to proactively engage System 2 to spot flawed logic, hidden biases and overconfident AI responses.

Here's how to put this into practice. I've created the Critical Five framework, which breaks critical thinking into five key components, each with both a mindset and a method perspective.

[Chart: To make critical thinking less ambiguous, the Critical Five framework breaks it down into five key components.]

Just ASK: A quick AI check for busy minds

While these five skills provide a solid foundation for AI-related critical thinking, they don't operate in a vacuum. Just as pilots must adapt their approach based on weather conditions, aircraft type and destination, we must adapt our critical thinking skills to fit different circumstances. Your focus and level of effort will be shaped by key contextual factors.

[Chart: Critical thinking doesn't happen in a vacuum. It is shaped by an individual's domain expertise, org culture and time constraints.]

Recognizing that many scenarios with AI output may not demand an in-depth review, I've developed a quick way of injecting critical thinking into daily AI usage. This is particularly important because, as Bizzari highlighted, "Current AI language models have been designed primarily with a focus on plausibility, not correctness. So, it can make the biggest lie on earth sound factual and convincing."
To counter this exact problem, I created a simple framework anyone can apply in seconds. Just ASK: for quick evaluations, focus on questioning the assumptions, the sources and your own objectivity. To show this approach in action, I'll use an example where I've prompted an AI tool to provide a marketing strategy for my small business. This quick evaluation could reveal potential blind spots that might otherwise turn promising AI recommendations into costly business mistakes, like a misguided marketing campaign.

The future of AI depends on human thinking

If more employees simply remember to "always ASK before using AI output," your organization can begin building a culture that actively safeguards against AI overreliance. Whether they use the full Critical Five framework or the quick ASK method, people transform from passive passengers into engaged pilots who actively steer how AI is used and trusted. AI can enhance our thinking, but it should never replace it. Left unchecked, AI encourages shortcuts that lead to costly mistakes. Used wisely, it becomes a powerful, strategic partner. This isn't about offloading cognition. It's about upgrading it—by pairing powerful tools with thoughtful, engaged minds. In the end, AI's value won't come from removing us from the process—it will come from how disciplined we are in applying critical thinking to what it helps us generate.
