Dubai plants AI transparency flag


Euronews, 23-07-2025
Dubai has launched the world's first Human-Machine Collaboration Icons - a system that makes visible the invisible, showing exactly how humans and intelligent machines work together in research and content creation.
The initiative comes under the direction of His Highness Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, the emirate's Crown Prince.
'Distinguishing between human creativity and artificial intelligence has become a real challenge in light of today's rapid technological advances,' Sheikh Hamdan said as he approved the system.
'That's why we launched this global classification system. We invite researchers, writers, publishers, designers, and content creators around the world to adopt it, and to use it responsibly, in ways that benefit people'.
For Dubai, this isn't just a policy tweak. It feels like a vision shared with the world. A statement that creativity, transparency and trust still matter in a future being reshaped by AI.
Developed by the Dubai Future Foundation, the system comes alive through five primary icons: 'All Human', 'Human led', 'Machine assisted', 'Machine led', and finally 'All Machine'.
Nine more functional icons dig deeper, revealing whether AI stepped in during ideation, data collection, design, writing or translation. Together they act like a set of honest signposts for readers, viewers and decision‑makers trying to understand: how much of this came from a human, and how much from a machine?
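As a rough illustration only, the two-level scheme — a primary icon plus the functional areas where AI was involved — can be modelled as a small data structure. The source does not publish machine-readable identifiers, so every name below is a hypothetical rendering of the labels quoted above; the article names only five of the nine functional icons, and the unnamed ones are deliberately left out rather than invented.

```python
from enum import Enum


class CollaborationLevel(Enum):
    """The five primary icons as quoted in the article (labels verbatim)."""
    ALL_HUMAN = "All Human"
    HUMAN_LED = "Human led"
    MACHINE_ASSISTED = "Machine assisted"
    MACHINE_LED = "Machine led"
    ALL_MACHINE = "All Machine"


# Functional areas the article names explicitly; the remaining four of the
# nine functional icons are not listed in the source, so they are omitted.
FUNCTIONAL_AREAS = {"ideation", "data collection", "design", "writing", "translation"}


def disclosure_label(level: CollaborationLevel, areas: set) -> str:
    """Build a human-readable disclosure string from a primary icon plus
    the functional areas in which AI stepped in (hypothetical format)."""
    unknown = areas - FUNCTIONAL_AREAS
    if unknown:
        raise ValueError(f"Unrecognised functional areas: {unknown}")
    if not areas:
        return level.value
    return f"{level.value} (AI used in: {', '.join(sorted(areas))})"
```

For example, a report whose ideas and translation came from a model but whose writing was human would carry `disclosure_label(CollaborationLevel.MACHINE_ASSISTED, {"ideation", "translation"})`.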
It's a deceptively simple idea that feels urgently relevant. In an age of viral deepfakes and generative models, these small symbols could make a huge difference in trust.
A city with a habit of leaping ahead
If this feels ambitious, it's because Dubai thrives on ambition. The emirate has spent years reinventing itself as more than a gleaming skyline or an aviation hub. Now, it wants to be the world's next technological crossroads.
The UAE's AI market, worth €29.7 billion in 2023, is on track to skyrocket to €234 billion by 2030. Government‑backed funds are pouring billions into data centres, chip fabrication and sovereign computing.
Partnerships with Microsoft, Nvidia, OpenAI and others are laying down fibre and silicon on a scale few other nations can match.
And it's not growth for growth's sake. 'AI is a fundamental shift in how businesses, governments and individuals relate to data, decisions and automation', said Tarek Kabrit, CEO of Dubai‑based tech firm Seez.
'The real value lies in how AI integrates seamlessly to empower people and create new human‑centric experiences'.
Built on people, not just machines
That human‑centric focus runs through Dubai's AI vision. Over a million people are being trained in AI skills. Universities like the Mohamed bin Zayed University of Artificial Intelligence are drawing talent from across the globe. And the country's AI Ethics Charter and data protection laws are setting guardrails as fast as innovation pushes ahead.
Sheikh Hamdan's call for global adoption of the new icon system is part of that ethos: a future where AI isn't a black box, but a partner you can see, measure and trust.
If this feels familiar, it's because Dubai has done it before. From launching Emirates airline with two leased planes and a dream, to sending the Hope Probe to Mars, the city has turned audacious ideas into benchmarks the rest of the world watches.
Now it's doing the same with AI and with a dose of emotion behind the engineering. The Human-Machine Collaboration Icons are more than just a framework. They're a reminder that in the race to build the future, it's not enough to be fast. You have to be open. You have to be trusted. And you have to bring people along with you.
In Dubai's own words: This is not just about machines creating. It's about humans and machines creating together, and owning that story, proudly, in the open.

Related Articles

Grok, is that Gaza? AI image checks mislocate news photographs

France 24, 3 hours ago

But when social media users asked Grok where it came from, X boss Elon Musk's artificial intelligence chatbot was certain that the photograph was taken in Yemen nearly seven years ago. The AI bot's untrue response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation on the Israel-Hamas war for posting the photo. At a time when internet users are turning to AI more and more to verify images, the furore shows the risks of trusting tools like Grok when the technology is far from error-free.

Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018. In fact the photo shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025. Before the war, sparked by Hamas's October 7, 2023 attack on Israel, Mariam weighed 25 kilograms, her mother told AFP.

Challenged on its incorrect response, Grok said: "I do not spread fake news; I base my answers on verified sources." The chatbot eventually issued a response that recognised the error -- but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen. The chatbot has previously issued content that praised Nazi leader Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.

Radical right bias

Grok's mistakes illustrate the limits of AI tools, whose functions are as impenetrable as "black boxes", said Louis de Diesbach, a researcher in technological ethics. "We don't know exactly why they give this or that reply, nor how they prioritise their sources," said Diesbach, author of a book on AI tools, "Hello ChatGPT". Each AI has biases linked to the information it was trained on and the instructions of its creators, he said.

In the researcher's view, Grok, made by Musk's xAI start-up, shows "highly pronounced biases which are highly aligned with the ideology" of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.

Asking a chatbot to pinpoint a photo's origin takes it out of its proper role, said Diesbach. "Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine'." AI does not necessarily seek accuracy -- "that's not the goal," the expert said.

Another AFP photograph of a starving Gazan child, taken by al-Qattaa in July 2025, had already been wrongly located and dated by Grok to Yemen, 2016. That error led to internet users accusing the French newspaper Liberation, which had published the photo, of manipulation.

'Friendly pathological liar'

An AI's bias is linked to the data it is fed and what happens during fine-tuning -- the so-called alignment phase -- which determines what the model will rate as a good or bad answer. "Just because you explain to it that the answer's wrong doesn't mean it will then give a different one," Diesbach said. "Its training data has not changed and neither has its alignment."

Grok is not alone in wrongly identifying images. When AFP asked Mistral AI's Le Chat -- which is in part trained on AFP's articles under an agreement between the French start-up and the news agency -- the bot also misidentified the photo of Mariam Dawwas as being from Yemen.

For Diesbach, chatbots must never be used as tools to verify facts. "They are not made to tell the truth," but to "generate content, whether true or false", he said.

US government gets a year of ChatGPT Enterprise for $1

France 24, 4 hours ago

Federal workers in the executive branch will have access to ChatGPT Enterprise in a partnership with the US General Services Administration, according to the pioneering San Francisco-based artificial intelligence (AI) company. "By giving government employees access to powerful, secure AI tools, we can help them solve problems for more people, faster," OpenAI said in a blog post announcing the alliance.

ChatGPT Enterprise does not use business data to train or improve OpenAI models, and the same rule will apply to federal use, according to the company.

Earlier this year, OpenAI announced an initiative focused on bringing advanced AI tools to US government workers. The news came with word that the US Department of Defense awarded OpenAI a $200 million contract to put generative AI to work for the military. OpenAI planned to show how cutting-edge AI can improve administrative operations, such as how service members get health care, and also has cyber defense applications, the startup said in a post.

OpenAI has also launched an initiative to help countries build their own AI infrastructure, with the US government a partner in projects. The tech firm's move to put its technology at the heart of national AI platforms around the world comes as it faces competition from Chinese rival DeepSeek. DeepSeek's success in delivering powerful AI models at a lower cost has rattled Silicon Valley and multiplied calls for US big tech to protect its dominance of the emerging technology.

The OpenAI for Countries initiative was launched in June under the auspices of a drive -- dubbed "Stargate" -- announced by US President Donald Trump to invest up to $500 billion in AI infrastructure in the United States. OpenAI, in "coordination" with the US government, will help countries build data centers and provide customized versions of ChatGPT, according to the tech firm.

Meta removes 6.8 million WhatsApp accounts linked to criminal scammers

Euronews, 7 hours ago

WhatsApp has taken down 6.8 million accounts that were 'linked to criminal scam centres' targeting people online around the world, its parent company Meta said. The account deletions, which Meta said took place over the first six months of the year, are part of wider company efforts to crack down on scams.

In a Tuesday announcement, Meta said it was also rolling out new tools on WhatsApp to help people spot scams, including a new safety overview that the platform will show when someone who is not in a user's contacts adds them to a group, as well as ongoing tests of alerts prompting users to pause before responding.

Scams are becoming all too common and increasingly sophisticated in today's digital world. Too-good-to-be-true offers and unsolicited messages attempt to steal consumers' information or money, with scams filling our phones, social media, and other corners of the internet each day.

Meta noted that 'some of the most prolific' sources of scams are criminal scam centres, which often stem from forced-labour operations run by organised crime -- and warned that such efforts often target people on many platforms at once in attempts to evade detection. That means a scam campaign may start with messages over text or a dating app, for example, and then move to social media and payment platforms, Meta said.

Meta, which also owns Facebook and Instagram, pointed to recent scam efforts that it said attempted to use its own apps -- as well as TikTok, Telegram, and AI-generated messages made using ChatGPT -- to offer payments for fake likes, to enlist people into a pyramid scheme, or to lure others into cryptocurrency investments. Meta linked these scams to a criminal scam centre in Cambodia and said it disrupted the campaign in partnership with ChatGPT maker OpenAI.
