Zayed University joins Digital Education Council

Al Etihad

30 June 2025 13:51
ABU DHABI (WAM) Zayed University (ZU) has joined more than 90 leading institutions worldwide as a member of the Digital Education Council (DEC), a global community dedicated to advancing artificial intelligence (AI) literacy, responsible digital transformation, and innovation in education.

ZU is the first university from the UAE to join the DEC, a significant milestone in the university's strategic vision to equip students, faculty, and leadership with the tools, mindset, and capabilities needed to thrive in an increasingly digital world. The membership builds on broader efforts to integrate AI across the university, including ongoing faculty development, digital pedagogy, and curriculum innovation aligned with the future of work.

"Integrating artificial intelligence across our work is vital to building digital fluency at Zayed University," said Professor Michael Allen, Acting Vice President of Zayed University. "Joining the DEC allows us to both contribute to and benefit from a global network of education leaders. But ultimately, the real impact lies in how we bring those insights to life - in our classrooms, in our programmes, and in how we prepare students for the world ahead."

Starting this summer, ZU will also roll out two key DEC initiatives: the Certificate in AI for Higher Education, designed for faculty and leadership, and the AI Literacy for Students programme.

Alongside the new DEC membership, ZU's College of Technological Innovation (CTI) will launch a new Bachelor of Science in Intelligent Systems Engineering this fall. The programme will prepare a new generation of engineers to design, build, and manage intelligent systems powered by AI and emerging technologies. CTI is also introducing two new Master's programmes, in Cybersecurity and in Digital Transformation and Innovation, responding to growing national and global demand for advanced digital skills and specialised expertise.


Related Articles

India Leads World in Generative AI Take‑Up

Arabian Post

Over ninety per cent of employees in India use generative artificial intelligence tools at work, with 92 per cent logging daily use, according to a recent report by the Boston Consulting Group. This figure places India well above the global average of about 72 per cent. The BCG study, based on a survey of 10,600 workers across 11 countries, highlights India's prominence in integrating generative AI. Alongside this, some 17 per cent of workers report that their organisations have embedded AI agents into daily workflows, ranking India among the top three nations globally for such integration.

High adoption has come with heightened concern. Nearly half of Indian employees—48 per cent—believe their roles are at risk of disappearing within the next decade due to AI, outpacing the global level of 41 per cent. Anxiety is compounded by low levels of understanding and guidance: only 33 per cent say they comprehend how AI agents function, while just 36 per cent feel they have received adequate training.

Despite these concerns, AI is delivering tangible productivity benefits. Almost half of Indian users report saving more than an hour per day through AI assistance, yet only one‑third receive support in leveraging that time for strategic tasks. Workflow redesign is emerging as a key differentiator: companies that pivot beyond tool deployment to reengineer tasks, offer structured training, and secure leadership backing are achieving stronger outcomes.

Experts cite several critical enablers for successful AI adoption. In‑person upskilling, access to approved AI platforms, and visible executive endorsement dramatically enhance uptake. In fact, where frontline workers report robust leadership support, regular usage jumps from 41 per cent to 82 per cent.

Security and governance issues remain pressing. About 46 per cent of workers worry that AI decisions lack sufficient human oversight, 35 per cent fear bias or unfairness, and 32 per cent question accountability for errors. Parallel research highlights that 92 per cent of executives flag security vulnerabilities—ranging from cyber‑attacks to data privacy—as major hurdles in AI implementation.

India's trajectory is supported by robust public and private investment. The UN Trade and Development's 2025 Technology and Innovation Report ranks India tenth globally in private‑sector AI investment. Infrastructure initiatives, such as the IndiaAI Mission's goal to build one of the world's largest AI compute networks by 2027, are bolstered by efforts from academia and industry. Centres of excellence at institutions like IIT Delhi and IIIT Hyderabad, alongside corporate alliances, are driving innovation and applied AI solutions.

AI's impact is felt across sectors. In public services, digital infrastructure and chatbots are enhancing citizen access. In agriculture, finance and healthcare, predictive analytics and generative AI are reshaping service delivery. Private‑sector growth projections suggest India's AI services market could reach US $17 billion by 2027.

Nonetheless, workforce readiness remains uneven. While 74 per cent of participants in a Microsoft‑sponsored skills programme hailed from smaller towns—and 65 per cent were women—training delivery is patchy, with many employees still left to self‑learn or rely on unauthorised tools.

For companies seeking a competitive edge, the insight is clear: widespread tool usage alone does not guarantee impact. Only by pairing AI with thoughtful workflow redesign, ethical governance and targeted training can businesses capture the full value of generative intelligence.

Business-critical mails in spam folders: Why real emails look fake now

Zawya

In the fight against phishing, forward-thinking organisations are winning. But there's a twist. The heightened vigilance that has empowered employees to detect suspicious emails is now creating a new dilemma: legitimate, business-critical messages are being flagged, ignored, or buried in spam folders. And in today's AI-fuelled cyber landscape, that reaction may be as justified as it is damaging.

Phishing works – and it's reshaping trust

The release of generative AI tools has supercharged phishing attempts. KnowBe4's Phishing Threat Trend Report 2025 shows that more than 80% of the analysed phishing emails were augmented by AI, and they're far more convincing than before. 'The gut-check we used to rely on has been gamed – and even the large language models now being explored to help detect suspicious emails are also struggling,' says Anna Collard, SVP of Content Strategy & Evangelist at KnowBe4 Africa. 'They're forced to dig deeper, assessing tone, context, and subtler red flags.'

The result? Suspicion is now the default

And it's not unwarranted. Maturing cybersecurity awareness and phishing simulation programmes have helped sharpen employees' scepticism. But this success has revealed a new problem: overcorrection. Emails that are real – from HR, IT, legal, or sales – are now increasingly being misjudged. In some cases, they're wrongly flagged as phishing by either people or systems. In others, they're simply ignored.

The irony is that some of the most common and legitimate corporate communication traits are now the very ones that raise red flags:

• Urgency: 'Sign this by COB today', or when every email from a colleague is marked 'urgent'
• Unexpected senders: e.g. HR tools or SaaS platforms
• Calls to action: 'Click here to confirm'
• Stylistic quirks: overly polished copy, too many links or bold phrases
• Tech misalignments: emails from legitimate senders failing DMARC or DKIM checks

'Even just using a third-party sender domain can cause confusion,' says Collard. 'If staff don't expect it – or don't recognise the platform – the message can get flagged.' For good reason, too: according to KnowBe4's Phishing Threat Trend Report, the top five legitimate platforms used to send out phishing emails include popular business tools such as DocuSign, PayPal, Microsoft, Google Drive, and Salesforce.

The cost of false positives

When real emails get sidelined, the impact is more than a missed message. Delayed IT updates, ignored HR deadlines, and lost sales opportunities can create serious ripple effects across operations. Deliverability issues also erode trust. And in high-stakes environments like healthcare, legal services or finance, false positives can become costly very quickly.

So, how do you write emails that get read – not flagged? To combat this growing challenge, organisations need to stop thinking of phishing risk as purely a recipient problem. Legitimate internal emails need to look legitimate too. Here's how every team – from HR to IT to marketing – can write more trustworthy emails.

Write like a human, deliver like a pro

• Subject lines should set expectations. Use clear, predictable language. Instead of 'IMPORTANT: Read this now!', try 'Reminder: Benefits enrollment closes Friday'.
• Lead with context before asking for action. Start with a reference point: 'You recently submitted a travel claim...' or 'As part of your onboarding...'.
• Limit urgency to what's truly urgent. Too many 'ASAP's will breed indifference. Use urgency sparingly – and explain why it matters. Remember: if everything is urgent, nothing is.
• Minimise links and avoid vague CTAs. Avoid phrases like 'click here' or hyperlinking whole sentences. Provide a fallback path: 'Or log into your dashboard directly.'
• Be cautious with tone and formatting. Avoid shouty subject lines, gimmicky language, or inconsistent formatting that can trigger filters.
• Test before sending. Run your email through spam-filter testing tools to see what might flag it (a toy pre-send check is sketched at the end of this article).

Get your digital paperwork in order

Even the best-written email may never reach its recipient if your authentication protocols aren't properly configured. SPF, DKIM, and DMARC are three essential technical settings that help prove your email really came from your domain:

• SPF tells email providers which servers are allowed to send emails using your domain name, helping stop spammers from pretending to be you.
• DKIM adds a digital signature to your emails to prove they really came from you and weren't changed along the way.
• DMARC brings SPF and DKIM together by setting rules for what to do with suspicious emails (like send them to spam or block them) and sends reports to your IT team so they can spot abuse.

'These protocols are a bit like a digital passport,' Collard explains. 'Without them, even a genuine email may not make it through.' (A DNS lookup sketch at the end of this article shows one way to confirm these records are published.)

But even technically sound emails can fall flat if they don't look legitimate to the reader. That's why it's just as important to consider how your internal teams craft and send messages.

Internal brand security: don't just train recipients – train senders too

Cyber awareness is often focused on detection. But to maintain deliverability and trust, sender behaviour matters too. Teach teams to avoid accidental red flags. Share templates and subject line guides. And ensure that employees – especially those sending to large groups – understand the basics of trustworthy communication.

Consistency is key. Make sure communications come from the same official addresses, follow familiar formats, and maintain a recognisable tone. This teaches recipients what to expect – and what to be cautious of – building a clearer line between legitimate messages and possible fakes.

'This is part of internal brand hygiene,' says Collard. 'When your team consistently communicates clearly and predictably, you build trust over time – with both employees and clients. That trust makes your emails easier to recognise, safer to deliver, and more likely to be opened.' In a world where AI can impersonate your tone and template with ease, your best defence is to sound like yourself – and help others know what to expect when you speak.

Distributed by APO Group on behalf of KnowBe4.
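As a concrete illustration of the pre-send check mentioned above, here is a minimal, hypothetical Python sketch that scans a draft for the kinds of red flags the article lists: urgency language, vague calls to action, shouty capitals, and bare links. The patterns and labels are illustrative assumptions, not the rules of KnowBe4's tools or of any real spam filter.

```python
# Toy pre-send check (not a real spam filter): the patterns below are
# illustrative assumptions based on the red flags this article lists.
import re

RED_FLAGS = [
    (re.compile(r"\b(urgent|asap|immediately)\b", re.IGNORECASE), "urgency language"),
    (re.compile(r"\bclick here\b", re.IGNORECASE), "vague call to action"),
    (re.compile(r"\b[A-Z]{4,}\b"), "shouty all-caps word"),
    (re.compile(r"https?://\S+"), "bare link"),
]

def check_draft(text: str) -> list[str]:
    """Return one warning per red-flag pattern found in the draft."""
    warnings = []
    for pattern, label in RED_FLAGS:
        hits = pattern.findall(text)
        if hits:
            warnings.append(f"{label}: {len(hits)} occurrence(s)")
    return warnings

if __name__ == "__main__":
    draft = "URGENT: Click here NOW to confirm your benefits enrollment!"
    for warning in check_draft(draft):
        print("warning -", warning)
```

Real filters weigh many more signals (sender reputation, authentication results, engagement history), so a clean pass here is no guarantee of deliverability; the point is simply to catch avoidable phrasing habits before hitting send.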
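To confirm that the SPF, DKIM, and DMARC records described above are actually published, a plain DNS TXT lookup is enough. Below is a minimal sketch using the third-party dnspython package; "example.com" and the DKIM selector "selector1" are placeholder assumptions you would replace with your own domain and the selector your mail provider assigns.

```python
# Minimal sketch: check that SPF, DKIM, and DMARC records resolve for a
# sending domain. Requires dnspython (pip install dnspython).
import dns.resolver

def txt_records(name: str) -> list[str]:
    """Return all TXT strings published at a DNS name, or [] if none."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(rdata.strings).decode() for rdata in answers]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"   # placeholder: your sending domain
selector = "selector1"   # placeholder: your provider's DKIM selector

spf = [t for t in txt_records(domain) if t.startswith("v=spf1")]
dkim = txt_records(f"{selector}._domainkey.{domain}")
dmarc = [t for t in txt_records(f"_dmarc.{domain}") if t.startswith("v=DMARC1")]

print("SPF:  ", spf or "missing")
print("DKIM: ", dkim or "missing")
print("DMARC:", dmarc or "missing")
```

Publishing or fixing these records happens in your DNS provider's console; this script only verifies that they resolve as expected.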

DFSA report flags mounting risks from AI, quantum computing

Khaleej Times

The Dubai Financial Services Authority (DFSA), the independent regulator of financial services conducted in or from the DIFC, has sounded a clear warning about the rising convergence of cyber risks, artificial intelligence (AI), and quantum computing in a new report that outlines the future of digital regulation for global financial systems.

'Cyber and artificial intelligence risk in financial services: Strengthening oversight through international dialogue,' released on June 30, highlights how emerging technologies are reshaping both the opportunity landscape and the threat environment across financial services. Published after the DFSA's inaugural Cyber and AI Risk Regulatory College — attended by 70 senior officials from 18 regulatory authorities worldwide — the report captures regulatory consensus on the accelerating pace of digital transformation and the urgent need for coordinated global oversight.

The report underscored the mounting complexity of cyber threats, the disruptive potential of AI, and the systemic vulnerabilities posed by quantum computing. As the lines between operational resilience, cybersecurity, and technological innovation continue to blur, the DFSA's message is clear: regulatory frameworks must evolve swiftly and collaboratively to protect the integrity of global financial systems in a world of accelerated digital disruption.

'Digital risks are no longer peripheral – they are fast becoming systemic,' said Justin Baldacchino, managing director of Supervision at DFSA. 'This report reflects a growing supervisory consensus on where these risks are converging and how regulatory approaches are evolving.'

Among the report's key findings is the rising frequency and sophistication of cyberattacks, many of which now involve 'Living Off the Land' tactics — where attackers misuse legitimate tools already present in systems to evade detection. The reliance on shared digital infrastructure, such as cloud services and third-party platforms, has further amplified vulnerability. A single-point failure within a critical provider — be it a cloud operator, payment processor, or managed services firm — could lead to widespread disruption, according to the DFSA.

Supply chain attacks also feature prominently in the risk narrative. Financial institutions face threats from compromised credentials, outdated or unpatched software, and malicious updates within partner ecosystems. The proliferation of Internet of Things (IoT) devices and edge technologies — often with weak security governance — has added new threat vectors to an already complex cybersecurity landscape.

The report also explores how cloud adoption, while enhancing resilience and scalability, introduces its own set of challenges. Cloud platforms enable faster deployment of AI solutions, but they raise critical concerns around data privacy, jurisdictional control, and vendor dependence. When sensitive financial data is processed or stored across borders, regulatory compliance and data sovereignty become difficult to manage.

Herman Schueller, director of Innovation & Technology Risk Supervision at DFSA, emphasised the need for cross-border regulatory collaboration: 'As innovation accelerates, financial regulators globally are actively examining how best to adapt oversight practices. This report reflects the value of open, international dialogue in building mutual understanding of the regulatory, technical, and operational dimensions of digital risks.'

One of the most striking themes of the report is the looming risk of quantum computing. Though still in the early stages of development, quantum computers have the potential to break existing cryptographic systems that underpin global financial security. The DFSA report urges early, coordinated planning to prepare for the transition to post-quantum cryptography, warning that institutions must not wait until quantum capability is commercially viable to act.

AI-driven threats are another focal point. Malicious actors are now using AI to automate attacks, bypass defences, and even create synthetic media such as deepfakes and voice clones that can deceive users and systems. These AI-powered tools can detect vulnerabilities, launch attacks at scale, and operate autonomously. The report calls for stronger explainability frameworks, third-party risk assessments, and robust governance to manage the growing reliance on AI across financial services.

The DFSA also notes that rising geopolitical tensions are compounding digital risks. State-sponsored cyber operations and Advanced Persistent Threats are becoming more frequent and targeted, often remaining undetected for long periods. As global financial institutions operate across jurisdictions with varying regulatory maturity, they face fragmented compliance burdens and heightened exposure to politically motivated cyber threats.

The DFSA report contributes to the regulator's broader commitment to proactive, principle-based supervision within the Dubai International Financial Centre (DIFC). Through ongoing initiatives like the DFSA Threat Intelligence Platform and work on AI governance, the authority is reinforcing its role as a thought leader in managing digital-era financial risks.
