
Business leaders trust AI in decision-making as UAE vision boosts confidence
MAYS IBRAHIM (ABU DHABI)

Seventy-nine percent of business leaders in the UAE trust artificial intelligence (AI) to make budget decisions and manage talent functions, covering key aspects like compensation, hiring, and performance evaluation, according to a recent poll.

The research, conducted by next-generation technology services provider Endava in partnership with 3Gem, surveyed 300 UAE business leaders across various industries to explore their views on AI in business, personal life, and society.

The survey revealed broad confidence in the technology's capabilities, with 80% of respondents saying they would feel comfortable with AI-driven systems overseeing critical health and safety functions at work. For UAE businesses, adopting AI emerged as the top strategic priority, followed by enhancing client services and investing in other technologies.

This trust extends beyond the workplace into personal decision-making, according to the study. Four in five respondents said they would rely on fully automated AI systems for major career decisions, including salary progression and timing for promotions or job changes. An equally high percentage trust AI for personal financial management, such as investment and retirement planning, as well as for automated health recommendations.

The study also showed that 74% of respondents agree that access to AI is as essential for UAE society as access to utilities like water and energy. Moreover, 79% believe that successful AI implementation will significantly improve the quality of life in the country.
National Strategy

The UAE is rapidly solidifying its position as a global AI leader, with billion-dollar investments in AI infrastructure and innovation announced this year. This momentum appears to resonate with the public, as 46% of those surveyed feel the UAE is ahead of the world in AI adoption and usage. When asked to rank global AI leaders, respondents placed the Middle East third, after the US and China.

"Trust is the critical enabler of any AI strategy. The UAE's clear national vision, its youthful and tech-savvy demographic, and its digital-first mindset give organisations a unique opportunity to innovate with confidence," said David Boast, General Manager for UAE and Saudi Arabia at Endava. "While other regions remain cautious, weighed down by debates over job displacement, our research shows that businesses in the UAE can move forward with certainty, knowing their workforces are not only ready for AI, but actively embracing it."

The study's findings indicate optimism about AI's impact on the job market. Nearly 75% of business leaders say their organisations have already created or plan to create dedicated AI roles, such as 'Head of AI'. The same percentage believe AI will improve employee job satisfaction, while 71% report AI has directly boosted their company's profitability.

There is also a sense of urgency: 66% of respondents think their organisation risks losing market share within two years if it fails to advance its AI capabilities, while 35% warn this could happen in less than a year.
According to Endava, around 30% of companies plan to significantly increase AI investments in 2025 compared to 2024.

Related Articles


Khaleej Times
New Dubai classification to remove 'AI stigma' in content, experts say
Would you want to know if the content you're consuming was created using AI? A new initiative in Dubai is making that level of transparency possible, and industry professionals say it could help shift how people perceive artificial intelligence in creative work.

As AI tools become increasingly embedded in industries from marketing to media, Dubai has introduced a Human–Machine Collaboration (HMC) classification system. Approved by Dubai Crown Prince Sheikh Hamdan, the new system uses visual icons to indicate the extent of human and AI involvement in producing content, whether for ideation, writing, design, or other stages. The aim is to enhance transparency across industries increasingly using automation and generative AI tools.

Abeer Faisal, marketing specialist at Omorfia Group, said the classification could help reshape public perception of AI by encouraging honest disclosure. 'This kind of system helps take the stigma out of using AI,' she said. 'It gives marketers and creators a way to say, yes, AI was part of the process, but in a way that made the content better, not less authentic.'

The system features five main icons, ranging from 'All Human' to 'All Machine', and nine functional tags that clarify which parts of the process involved AI collaboration. Though it doesn't assign percentages to each input, the system offers creators a structured way to reflect how intelligent machines were used in the process.

Abeer believes it could also raise the bar on quality. 'It might feel like an extra step at first, but it's a step in the right direction. It encourages more conscious creation and that only improves the work,' she said, adding that in trust-driven sectors like PR, such labels could offer an added layer of credibility.

She added that the labels could influence how audiences engage with content. 'Personally, if I started seeing those badges on posts, I'd probably pause and look closer. If it were something emotional or storytelling-heavy and labeled "machine-led", I might feel a bit disconnected. But if it was something futuristic or data-driven, I'd respect the honesty. To me, the label doesn't define the content; the intention behind it does. And that's the shift we're heading into: people won't just care what you create, they'll care how you created it,' she said.

Senior communications specialist Rawan Khalifa echoed the sentiment, calling the initiative 'a commitment to transparency'. She said the system acknowledges the growing role of AI in creative work, while also reinforcing the importance of how content is made.

'I've seen increasing concern about the decline of critical thinking skills as AI tools grow,' she said. 'To me, that's not a call to reject technology, it's a call to build AI literacy. We now have to upskill and use AI with intention, not default to it out of convenience.'

She pointed out that the push for efficiency has sometimes led to suggestions of fully automating communications tasks, something she sees as risky. 'I've heard suggestions to fully automate some communications tasks under the guise of "efficiency", but even the best-written prompt can't replicate human judgment,' she said. 'Real communication isn't just producing content, it's about conveying meaning, and it requires judgment, nuance, and context.'

Interestingly, Rawan shared that she used AI to refine her own quote, not to replace her voice but to sharpen it. 'For communicators like myself, this doesn't just offer transparency, it protects the integrity of our work,' she said.
However, not everyone expects widespread adoption right away. Nadine Al Suwaidi, a UAE-based digital strategist, said that while the system is a positive step in principle, it may face hesitation in fast-paced commercial settings. 'The line between "machine-assisted" and "machine-led" can be blurry, and there's a lot of subjectivity,' she said. 'Unless there's clear incentive or pressure from clients or regulators, I don't think brands will rush to label their content this way.'

She added that while the system may work well for government or academic institutions where documentation is critical, commercial teams are often more focused on timelines, performance, and simplicity.

While the system is currently optional for non-government creators, Sheikh Hamdan has directed Dubai Government entities to begin adopting it in their research and knowledge-based work.
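For readers who think in data structures, the classification described above amounts to a small two-part vocabulary: one of five mutually exclusive involvement levels and a set of functional tags naming the stages AI touched. The sketch below is an illustrative model only; the names HMCLevel, FUNCTIONAL_TAGS, and HMCLabel are hypothetical and are not part of any official Dubai tooling, and only the six stages mentioned in coverage are listed (the official set comprises nine).

```python
from dataclasses import dataclass, field
from enum import Enum


class HMCLevel(Enum):
    """The five primary icons, in order of increasing machine involvement."""
    ALL_HUMAN = "All Human"
    HUMAN_LED = "Human-Led"
    MACHINE_ASSISTED = "Machine-Assisted"
    MACHINE_LED = "Machine-Led"
    ALL_MACHINE = "All Machine"


# Functional tags name the stages where AI contributed. Only the six stages
# mentioned in the articles are listed here; the official set has nine.
FUNCTIONAL_TAGS = {
    "ideation",
    "data analysis",
    "writing",
    "translation",
    "visuals",
    "design",
}


@dataclass
class HMCLabel:
    """A self-declared disclosure: one primary level plus any functional tags.

    The scheme deliberately avoids percentages, so the label carries no
    numeric weighting, only the level and the stages AI was involved in.
    """
    level: HMCLevel
    tags: set[str] = field(default_factory=set)

    def __post_init__(self) -> None:
        unknown = self.tags - FUNCTIONAL_TAGS
        if unknown:
            raise ValueError(f"Unknown functional tags: {unknown}")

    def badge(self) -> str:
        """Render a human-readable badge, e.g. for a social post or report."""
        if not self.tags:
            return self.level.value
        return f"{self.level.value} (AI used for: {', '.join(sorted(self.tags))})"


# Example: a marketing post drafted by a human, with AI assisting on two stages.
print(HMCLabel(HMCLevel.HUMAN_LED, {"visuals", "ideation"}).badge())
# -> Human-Led (AI used for: ideation, visuals)
```

The point of modelling it this way is simply to show that the declaration is categorical rather than quantitative, which is exactly the subjectivity Al Suwaidi flags: the boundary between "machine-assisted" and "machine-led" is a judgment call, not a threshold.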


Arabian Post
Dubai Sets Global Benchmark for AI Transparency
Arabian Post Staff - Dubai

Dubai has introduced the world's first icon-based system to clearly signal whether content is crafted by humans, artificial intelligence, or a blend of both. Launched by Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, the Human–Machine Collaboration classification marks a shift in content disclosure standards. The initiative requires government entities to adopt the system immediately, signalling a drive towards accountability and public trust in an era of rapid AI integration.

The HMC framework comprises five primary icons: All Human, Human-Led, Machine-Assisted, Machine-Led, and All Machine, each reflecting an increasing level of machine involvement. Creators can further apply nine functional icons to indicate AI contribution across tasks such as ideation, data analysis, writing, translation, visuals, and design.

The system, developed by the Dubai Future Foundation and endorsed by Sheikh Hamdan in his capacity as Chairman of its Board of Trustees, is compulsory for all Dubai government research and knowledge publications. Media content, academic papers, technical reports, videos, journals and other multimedia outputs must now prominently display the appropriate icons. For non-government creators, the icons are voluntary but available as a marker of ethical transparency.

Sheikh Hamdan said transparency is essential for distinguishing human creativity from machine efficiency. He urged global content creators, including researchers, publishers, writers, and designers, to adopt the new classification as a norm. On LinkedIn, he stated: 'Today, we launch the world's first Human–Machine Collaboration Icons…a new global benchmark in the age of AI,' inviting worldwide adoption.

The initiative meets growing demand for clarity around AI-generated content in scientific, academic, and creative fields. As AI technologies such as generative models and automation tools proliferate, distinguishing authorship becomes increasingly complex. The HMC system addresses this by offering concise visual indicators of machine involvement throughout a document's lifecycle.

Beyond classification, the icons come with practical guidelines. Each icon can appear on the cover, footer, or bibliography of a document, with no numerical thresholds assigned. The nine functional icons enable precise reporting by highlighting the stages influenced by AI, such as data collection or translation. The system avoids quantification because of the difficulty of objectively assessing AI contribution levels.

Dubai's icon strategy is designed to strengthen trust in public knowledge creation. Government entities in Dubai must adopt the icons; private sector use is described as 'opt-in and voluntary', encouraging transparency across broader sectors. The icons aim to build credibility in educational materials, annual reports, research briefs, social media content, public-facing campaigns, and design outputs.

Industry experts have broadly welcomed the initiative. Fast Company Middle East noted that the dual-layer approach offers transparency without excessive complexity, while Economy Middle East reported Sheikh Hamdan's emphasis on the blurred lines between human art and machine output. Gulf News cited the icons as a tool for 'honest self-assessment', reinforcing accountability among content creators. Academics and publishers are now exploring integration possibilities, and the system could become a template for journal submission protocols or university publishing frameworks.
Concerns persist about compliance monitoring and the potential for misuse: some question whether creators may understate AI contribution or apply the icons inconsistently across formats. The Dubai Future Foundation has emphasised that the icons are free to use and require no licensing; they are copyrighted but freely deployable, with no prior permission needed. The foundation's intention is to encourage natural adoption in scholarly work, media, and social channels, promoting a culture of transparency rather than regulatory enforcement.

Global observers note that while Dubai is first, other cities and institutions are likely to follow. The HMC icons address growing demand from research communities for AI disclosure standards, amid debates over authorship attribution, peer review confidence, and reproducibility. Dubai's initiative closes a gap in ethical AI practice by establishing a clear visual code for machine involvement.

As AI-generated content becomes ubiquitous, the system's success will depend on global uptake, consistent application, and alignment with existing ethics and publishing standards. In the meantime, Dubai's icons offer a blueprint for transparency, setting a new bar for content creation in the AI era.
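Because the icons must appear on the cover, footer, or bibliography of government publications, the natural implementation is metadata that travels with each document record and is rendered wherever a template needs it. The sketch below is a hypothetical illustration of that workflow, not an official implementation: the record fields, the example title, and the footer wording are all assumptions.

```python
from dataclasses import dataclass

# The five primary icons, from least to most machine involvement.
PRIMARY_ICONS = ["All Human", "Human-Led", "Machine-Assisted", "Machine-Led", "All Machine"]


@dataclass
class PublicationRecord:
    """Minimal metadata a publishing pipeline might keep for each output."""
    title: str
    entity: str                 # publishing entity (illustrative)
    hmc_icon: str               # one of PRIMARY_ICONS, self-declared
    ai_stages: tuple[str, ...]  # stages where AI contributed, e.g. "translation"

    def footer_line(self) -> str:
        """Compose a disclosure line for the document footer.

        The scheme assigns no percentages, so the footer states only the
        declared icon and, where relevant, the stages AI was involved in.
        """
        if self.hmc_icon not in PRIMARY_ICONS:
            raise ValueError(f"Unrecognised HMC icon: {self.hmc_icon!r}")
        stages = f"; AI involved in: {', '.join(self.ai_stages)}" if self.ai_stages else ""
        return f"{self.entity} | Human-Machine Collaboration: {self.hmc_icon}{stages}"


# Example: a research brief written by staff, with AI used for data analysis.
brief = PublicationRecord(
    title="Example Research Brief",
    entity="A Dubai Government entity",
    hmc_icon="Machine-Assisted",
    ai_stages=("data analysis",),
)
print(brief.footer_line())
# -> A Dubai Government entity | Human-Machine Collaboration: Machine-Assisted; AI involved in: data analysis
```

Keeping the declaration as structured metadata rather than free text is one way to address the consistency concern raised above: the same record can drive the footer of a report, the caption of a social post, and a bibliography entry without the label drifting between formats.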


Arabian Post
Anthropic to Deploy Global Data Processing Infrastructure
Anthropic will extend its data processing infrastructure on 19 August 2025 to include servers across the United States, Europe, Asia and Australia. The expansion applies to Anthropic API and Claude for Work customers and aims to enhance operational reliability through geographic redundancy. The company emphasises that data storage will remain strictly within the US.

Customers requiring data processing solely within US-based servers must opt out of international regions by the effective date. Those who have already specified this preference through a custom agreement or order form are unaffected and need not take further action.

This strategic shift reflects Anthropic's broader infrastructure evolution. As AI workloads grow, the company is adopting a multi-region architecture to reduce latency, enhance service continuity, and bolster failover capabilities. The decision mirrors industry practice, where global coverage mitigates regional outages and performance bottlenecks.

Security standards and infrastructure design are outlined in Anthropic's Trust Center, which details compliance with industry norms and data protection measures. Despite the geographical expansion of processing, the firm underscores that all stored data remains within US jurisdiction, alleviating concerns tied to sovereignty and regulatory oversight.

Customers who do not opt out by mid-August will be transitioned automatically to the default multi-region processing. Anthropic urges affected users to submit their preferences ahead of time to ensure compliance with regulatory or internal policy mandates.

Experts note that the policy update signals a balancing act between performance and privacy. An infrastructure built across multiple regions reduces the risk of service disruptions, but it also raises questions about data governance. Anthropic's retention of US-only storage and an explicit opt-out pathway aims to reconcile these objectives.

Anthropic's public filings and product documentation confirm the timeline and scope. The Trust Center highlights end-to-end encryption, stringent access controls, and audit logging across geographies. Internal compliance teams reportedly reviewed the regional rollout to ensure compatibility with data protection laws such as GDPR.

Industry analysts view the move as a logical progression for AI service providers. As model sizes scale and workloads intensify, single-region dependencies become untenable during outages or maintenance windows. Multi-region architectures offer resilience and enable seamless user experiences across time zones.

At the same time, customer demand for data sovereignty remains strong. Since Anthropic explicitly maintains storage in the US and offers selective processing opt-outs, clients bound by financial, legal or governmental constraints can retain compliance without sacrificing service continuity.

The executive team at Anthropic, including leadership of its enterprise products, reportedly supported the phased rollout. In practice, enterprise and API customers were notified via direct communication, with clear opt-out channels and a deadline to lock in processing preferences.

The announcement coincides with broader initiatives by Anthropic to enhance its global reach. Earlier in 2025, the firm expanded Claude's availability in Europe and launched enterprise editions via AWS Marketplace. It also secured partnerships with major data providers such as S&P Global to enrich Claude's enterprise capabilities using its Model Context Protocol.
Anthropic's phased rollout strategy is guided by its history of deploying across diverse regions, ensuring that security controls and compliance protocols are harmonised across geographies. The Trust Center details these measures, including data encryption at rest and in transit, strict key management, and role-based access control audits. Absent further opt-out instructions, customer accounts will be migrated to the new multi-region configuration by 19 August 2025. Anthropic encourages users with policy-driven or legal constraints to declare their preferences promptly.
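To visualise the arrangement the article describes, the sketch below models the routing decision in plain Python: processing may be served from whichever permitted region is healthy, while storage is always pinned to the US, and customers who opted out are restricted to US processing. Everything here is hypothetical; the class, field, and region names are illustrative and do not correspond to Anthropic's actual API, configuration options, or internal infrastructure.

```python
from dataclasses import dataclass

# Hypothetical region identifiers for illustration only; Anthropic has not
# published its internal region names or routing logic.
ALL_PROCESSING_REGIONS = ("us", "eu", "asia", "australia")
STORAGE_REGION = "us"  # per the announcement, stored data stays in the US


@dataclass
class CustomerRoutingPolicy:
    """Illustrative model of the opt-in/opt-out choice described above."""
    customer_id: str
    us_only_processing: bool = False  # True if the customer opted out of international regions

    def allowed_processing_regions(self) -> tuple[str, ...]:
        return ("us",) if self.us_only_processing else ALL_PROCESSING_REGIONS

    def route_request(self, healthy_regions: set[str]) -> str:
        """Pick the first permitted region that is currently healthy (failover)."""
        for region in self.allowed_processing_regions():
            if region in healthy_regions:
                return region
        raise RuntimeError("No permitted processing region is available")

    def storage_region(self) -> str:
        """Storage is unaffected by the processing choice."""
        return STORAGE_REGION


# Example: a customer who did not opt out fails over to the EU when the US
# processing region is unavailable; their data is still stored in the US.
policy = CustomerRoutingPolicy(customer_id="example-customer")
print(policy.route_request(healthy_regions={"eu", "asia"}))  # -> eu
print(policy.storage_region())                               # -> us
```

The sketch captures the trade-off the article highlights: widening the set of permitted processing regions buys resilience and lower latency, while keeping storage and the opt-out path fixed preserves the data-sovereignty guarantees that regulated customers rely on.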