Emerging tech adoption slows amid global uncertainty
GlobalData's Tech Sentiment Polls Q1 2025 survey reveals that industry respondents now believe six of the seven most significant emerging technologies will take longer to disrupt their sectors than they previously expected, or may never do so at all.
From Q4 2024 to Q1 2025, the share of respondents indicating that their sector is already being disrupted by cybersecurity fell from 61% to 59% (-2pp), cloud computing from 65% to 59% (-6pp), artificial intelligence from 53% to 49% (-4pp), the internet of things from 48% to 44% (-4pp), robotics from 38% to 33% (-5pp) and augmented reality from 23% to 22% (-1pp). Only the metaverse saw a rise in the share of respondents who believe it is already disrupting sectors, although nearly half (45%) of the 354 respondents believe it never will.
Over the same period, the proportion of respondents indicating that their sector will be disrupted at some point in the next decade, or not at all, rose from 38% to 41% (+3pp) for cybersecurity, from 38% to 47% (+9pp) for cloud computing, from 47% to 52% (+5pp) for artificial intelligence, from 52% to 56% (+4pp) for the internet of things, from 62% to 67% (+5pp) for robotics and from 77% to 80% (+3pp) for augmented reality. The metaverse saw a fall from 84% to 83%.
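To be clear about the arithmetic, the percentage-point (pp) deltas quoted above are simple differences between the two quarters' shares of respondents, not relative changes. A minimal sketch in Python, using the cybersecurity figures from the survey, illustrates the distinction:

    # Percentage-point change vs. relative change, using the survey's
    # cybersecurity figures (61% in Q4 2024, 59% in Q1 2025).
    q4_share, q1_share = 61.0, 59.0  # % of respondents reporting disruption now

    pp_change = q1_share - q4_share                            # -2.0 pp, as reported
    relative_change = (q1_share - q4_share) / q4_share * 100   # about -3.3%

    print(f"{pp_change:+.0f}pp, {relative_change:+.1f}% relative")

In other words, a two-point drop on a 61% base is roughly a 3% relative decline, so the pp figures should not be read as percentage falls.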
While the report suggests no reasons for this tempering of expectations, the survey was carried out between January and March this year, coinciding with the first two and a half months of the new Trump administration in the US and the global uncertainty it has brought. An upending of the global trade system via a sweeping, stringent and scattergun tariff agenda, along with a lack of clarity about the US's intentions regarding Russia's ongoing invasion of Ukraine, are two aspects of the administration giving businesses pause for thought.
Despite this, survey respondents remain buoyant about the overall value of the technologies. A huge 91% believe that robotics will either live up to all of its hype or at least has a use, with the equivalent figures at 89% for artificial intelligence, 88% for cybersecurity, 87% for cloud computing, 74% for augmented reality and 71% for the internet of things. Only 26% believe as much for the metaverse, however, with 60% believing it is all hype and no substance.
"Emerging tech adoption slows amid global uncertainty" was originally created and published by Verdict, a GlobalData owned brand.