
Beyond The Tech Supercycle: Socio-Cultural Drivers As Strategy Multipliers
We are deep in a technology supercycle—a period of rapid, relentless acceleration that will eventually tip toward transformation or disruption. But for now, it's all forward motion. Full throttle. Fueled by the convergence of AI, biotech and sensor technology, this moment is not about incremental change. It's about seismic, society-shaping transformation.
If you're searching for historical parallels, think of the internet boom of the 1990s or post-war industrial growth in Japan. But even those had warning signs. This wave is arriving faster, hitting harder and leaving less room for slow adaptation.
Futurist Amy Webb and the team at the Future Today Strategy Group recently released a comprehensive report tracking hundreds of converging tech trends. It's dense, thorough and a little unnerving, but it's an essential read. The report doesn't just map the future. It highlights how that future is already taking shape, often without our awareness.
This is where it gets more uncomfortable. Many leaders are still operating with the mindset: 'We'll deal with it when we get there.' But what if the bridge you're counting on to get from now to next has already disappeared? What if you're standing at the edge of a chasm?
Artificial General Intelligence (AGI) is advancing faster than most institutions or leaders can process. These systems won't just enhance workflows—they'll rewire entire industries, redefine roles and shift core relationships between people and machines.
At SXSW 2025 in Austin, Webb gave a talk that felt more like a wake-up call. She walked through some of the most unsettling innovations already underway: multi-agent systems that work without human input, AI models speaking to each other in unknown languages—what she called 'droid speak'—and the convergence of biology and technology in ways we once thought were science fiction.
Still not unsettled? Consider this: we now have commercial computers made using living human neurons. Webb called them 'the first living machines.'
This is no longer just artificial intelligence. It's living intelligence. AI systems that may soon be organic, or at the very least, deeply integrated with human nervous systems and tissue. The implications aren't just technical—they're societal, ethical and deeply human.
The speed of change is thrilling. But it also raises a sobering question: are we actually ready for what's already arrived?
These aren't speculative futures. They're here—just unevenly distributed. And they're not just technological shifts. They are convergent changes across society, technology and culture.
We can't view this as a purely tech-driven phenomenon. In reality, technology and society are in a continuous feedback loop. Consider how the pandemic accelerated remote work through tools like Zoom and Teams. But it was changing human expectations around autonomy, mental health and flexibility that reshaped those same tools in return. Features like asynchronous collaboration and well-being integrations weren't just technical upgrades—they were human responses baked into design.
We're not just reacting to change anymore. We're co-creating it. And yet, most leadership teams still aren't seeing the full ecosystem. Leadership today isn't about prediction. It's about preparation, not just technologically but structurally, culturally and behaviorally.
Most organizations are still working from static strategic plans that stretch five years into the future. They're beautifully designed and thoroughly approved—but quickly irrelevant.
In fast-moving environments, frequent strategy revisions (often triggered by market or tech shifts) can be mistaken for agility. But more often, they signal misalignment or a fundamental disconnect from how the world is actually changing. When the plan keeps changing, trust erodes. People don't just lose faith in the strategy—they start questioning leadership's ability to deliver at all.
In this environment, strategy can't be a fixed document. It has to be a living system. It should evolve with weak signals, cultural shifts and emergent behavior—both inside and outside the organization.
Think chaos theory: a butterfly flaps its wings in Brazil, and a tornado hits Texas. Now swap that for a viral TikTok video that reshapes consumer sentiment overnight. Or a single Gen Z activist redefining a brand with one viral post.
Dynamic strategy needs more than financial modeling or AI fluency. It also requires cultural insight and alignment with purpose and values. Unilever, for instance, links its strategy directly to sustainability and social impact—not just to appear progressive, but because modern relevance demands ethical clarity. And it's performance-driven by design. As its charter puts it, 'Ringing the alarm and setting long-term ambitions isn't good enough anymore. Now is the time to focus on delivering impact by making sustainability progress integral to business performance.'
Resilience, trust, and cultural relevance are now core competitive advantages. Tech fluency alone isn't enough—and over-indexing on it leaves organizations exposed.
Strategy doesn't live in PowerPoint decks. It lives in the daily decisions made by people across your organization. And yet Gallup reports that only about three in 10 leaders and managers say they have discussed with each team member how changes in their organization will affect them specifically. That's not a communications issue—it's a listening issue.
Your company is shaped more by Reddit threads, Glassdoor reviews, and TikTok trends than by executive roundtables. If you're not listening across those platforms, you're not listening to reality. Leaders need real-time feedback loops.
Host open forums. Run anonymous pulse checks. Conduct digital ethnography. Invite dissent, reward curiosity and make it safe for truth to travel upward. Invest as much in cultural adaptability and intelligence as you do in technology upgrades—and listen to voices beyond the tech echo chamber.

In a conversation with me, Piyush Gupta, the former CEO of DBS Bank and an early pioneer of AI in banking, spoke directly to this convergence. DBS didn't just hire data scientists. They brought in anthropologists and ethnographers, because understanding culture is just as critical as understanding code.
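To make the idea of a real-time feedback loop concrete, here is a minimal sketch of how anonymous pulse-check scores might be aggregated and flagged for follow-up. The 1-to-5 scale, the theme names and the thresholds are illustrative assumptions, not a prescription for any particular survey tool.

```python
# A minimal sketch of a "real-time feedback loop": aggregate anonymous
# pulse-check scores by theme and flag themes whose rolling average drops.
# All names, scales and thresholds are illustrative assumptions.
from collections import defaultdict, deque
from statistics import mean

WINDOW = 20            # responses kept per rolling window (assumed)
ALERT_THRESHOLD = 2.5  # on a 1-5 scale; below this, start a conversation

class PulseTracker:
    """Keeps a rolling window of anonymous 1-5 scores per theme."""
    def __init__(self, window=WINDOW):
        self.scores = defaultdict(lambda: deque(maxlen=window))

    def record(self, theme: str, score: int) -> None:
        """Store one anonymous response for a theme."""
        self.scores[theme].append(score)

    def alerts(self):
        """Yield (theme, rolling_mean) pairs that warrant human follow-up."""
        for theme, recent in self.scores.items():
            avg = mean(recent)
            if avg < ALERT_THRESHOLD:
                yield theme, round(avg, 2)

if __name__ == "__main__":
    tracker = PulseTracker()
    # Hypothetical pulse-check responses: (theme, 1-5 score)
    for theme, score in [("workload", 2), ("ai_rollout_clarity", 3),
                         ("workload", 2), ("ai_rollout_clarity", 4),
                         ("workload", 1)]:
        tracker.record(theme, score)
    for theme, avg in tracker.alerts():
        print(f"Follow up on '{theme}': rolling average {avg}")
```

The value isn't in the code itself. It's in treating listening as a standing system, with thresholds that trigger human conversations, rather than as an annual engagement survey.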
The lesson for leaders? If your innovation strategy doesn't include human insight, it's incomplete. Technology may drive the future, but culture determines whether you can actually arrive there.
In a world this volatile, planning alone is no longer sufficient. Simulation must become a core capability.
Red teaming, scenario planning and design fiction, once the preserve of military planners and innovation labs, now belong in every leadership toolkit.
Modern simulation can take many forms.
In a recent leadership session I facilitated, we used appreciative inquiry to explore what success could look like a decade from now. I posed a question known as The Oracle Prompt:
'It's the year 2035. Your organization is thriving. What happened between now and then to make it possible?'
That one question unlocked an entirely new go-to-market strategy. But more importantly, it aligned the team around a vision that felt both bold and attainable. It shifted the conversation from fear to foresight.
Simulation builds instinct and muscle memory. It allows leaders to practice for change before it becomes real. Sun Tzu once said, 'Appear at points which the enemy must hasten to defend.' Today, the threat isn't your competitor. It's inertia. It's the change you chose not to prepare for.
Ask yourself: Are you tracking weak signals as closely as quarterly results? Are you rehearsing futures before they arrive? Do your feedback loops reach beyond the executive roundtable? Are you investing as much in cultural intelligence as in technology upgrades?
If you're unsure about more than two, you may be reacting to change instead of leading it.
Too many organizations are waiting for the future to become clear before taking action. That's a dangerous delay. Leadership now means moving ahead of what's obvious. Acting on signals before they're mainstream. Leaping, even when the bridge isn't there yet.
Wayne Gretzky famously said, 'Skate to where the puck is going.' But in this cycle, that's no longer enough. You need to skate to where others haven't even imagined the puck could go.
Because the future won't announce itself. It won't arrive with a roadmap. It will reward those who are already there—ready to adapt, ready to lead, and ready to align technology with what makes us human.
So don't wait for a supercycle to appear or for a bridge to conveniently show up. Build your capacity to leap.