DeepSeek and the Promethean dilemma: The ethics of open-source AI


Gulf Business, 13-03-2025

A very long time ago, atop the heights of Mount Olympus, a drama unfolded that would shape the human story: Zeus resented how the Titan Prometheus had become attached to humans, so decreed that no human could use fire on earth — a reminder of the gods' ultimate power. Yet Prometheus, defiant, smuggled a spark of divine fire back to humanity.
That spark ignited the rise of civilisations and empires as humans harnessed its potential. Some became so confident in their mastery that they questioned the gods themselves, even believing they were gods. Zeus was furious. Not only had Prometheus stolen from the heavens, but he had upended the natural order of human subservience.
For Prometheus, it didn't end well. Zeus exacted his vengeance, which led to the opening of Pandora's box.
The lesson? Empowering humanity with fire led to extraordinary progress, but humans are nothing if not unpredictable. There are accidents, and there are arsonists.
Open-source artificial intelligence feels much the same: a Promethean spark with immense potential and significant risks.
Open-source AI refers to systems whose components — code, models, and sometimes datasets — are made publicly accessible. This openness allows individuals and organisations to use, study, and modify these AI resources freely. It democratises access to technology, accelerates innovation, and empowers smaller players.
Projects like Llama, Mistral and, more recently, DeepSeek exemplify this movement.
But just as fire forged weapons alongside warmth, open-source AI carries ethical dilemmas. Its accessibility — the foundation of its power — can heighten risks if unchecked. With proprietary models, individuals and companies can be held accountable (though that accountability has been somewhat diminished by the revocation of Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence).
With open source, we rely on a willing community of dispersed individuals to do the right thing. While openness fosters rapid innovation and transparency, it needs tools and assurances to prevent misuse.
DeepSeek's low-cost, open-source AI disrupts the very foundation of the global AI race. Developed for a fraction of the cost of its rivals, its efficiency and openness challenge the assumption that massive resources are prerequisites for cutting-edge technology. Yet with openness comes a lack of control. Once released, models are no longer governed by their creators, leaving accountability elusive when harm occurs. This underscores the urgent need for the global AI community to develop tools — a suite of tests, monitoring systems, and ethical protocols — to ensure that open-source models behave responsibly and resist malicious manipulation.
Inspiration from DeepSeek's example
The UK and Europe, with constrained AI budgets relative to the US and China, can take inspiration from DeepSeek's example. By focusing on efficiency over scale, these nations could embrace open-source frameworks to pool talent and resources, fostering collective advancements rather than isolated efforts.
This approach aligns with the UK's stated commitment to fairness, accountability, and transparency in AI development. Furthermore, the UK's leadership in ethical AI could drive the creation of governance standards that enhance the safety and reliability of open-source models without stifling their potential.
History offers parallels. Open-source software, from Linux to decentralised cryptocurrencies, demonstrates how collective innovation can accelerate progress. But freedom without governance often invites chaos. Bitcoin democratised financial transactions, but it also fuelled ransomware attacks and unregulated markets. In AI, the stakes are higher still.
A safety net is key
Artificial intelligence's borderless nature accelerates innovation but complicates governance. A safety net is needed that ensures innovation does not outpace responsibility. This is where the AI community must come together to create tools that govern open-source models effectively. Navigating these challenges demands balance.
Developers must embed safeguards into their models, such as fine-grained permissions, ethical guidelines, and robust monitoring mechanisms. Initiatives like the Global Partnership on AI (GPAI) offer a collaborative platform to monitor developments and respond to risks.
Prometheus gave humanity fire, but he did so without a plan for its use. Open-source AI could rapidly bring transformational progress. But we must think of the consequences and prepare accordingly — something Prometheus, for all his brilliance, did not.


Related Articles

Nvidia aims to build industrial AI platform in Germany
Tahawul Tech, 4 days ago

At the recent VivaTech conference in Paris, Nvidia CEO Jensen Huang announced that the company would build its first artificial intelligence cloud platform for industrial applications in Germany. The technology aims to help carmakers such as BMW and Mercedes-Benz with processes from simulating product design to managing logistics. 'In just two years, we will increase the amount of AI computing capacity in Europe by a factor of 10,' Huang said. 'Europe has now awakened to the importance of AI factories and the importance of the AI infrastructure,' he said, laying out plans for 20 AI factories – large-scale infrastructure designed for developing, training and deploying AI models – in Europe. While Europe has lagged behind the US and China in developing AI technologies, the European Commission said in March that it planned to invest $20 billion to construct four AI factories. Nvidia is also partnering with European AI champion Mistral to create AI computing that runs on 18,000 of the latest Nvidia chips for European businesses. 'Sovereign AI is an imperative – no company, industry or nation can outsource its intelligence,' Huang said. Huang has been travelling the globe to highlight the importance of businesses adopting AI and the dangers of falling behind.

Source: Reuters. Image credit: Stock image

Mistral launches Europe's first AI reasoning model
Tahawul Tech, 5 days ago

Mistral recently launched Europe's first AI reasoning model as it tries to keep pace with American and Chinese rivals within the sphere of AI development. The French startup has attempted to differentiate itself by championing its European roots as well as making some of its models open source, in contrast to other proprietary offerings. Mistral is considered Europe's best shot at having a home-grown AI competitor, but has lagged behind in terms of market share and revenue. Reasoning models use chain-of-thought techniques – a process that generates answers with intermediate reasoning steps when solving complex problems. They could also be a promising path forward in advancing AI's capabilities as the traditional approach of building ever-bigger large language models by adding more data and computing power begins to hit limitations. For Mistral, which was valued by venture capitalists at $6.2 billion, an industry shift away from 'scaling up' could give it a window to catch up with better capitalised rivals. Mistral is launching an open-source Magistral Small model and a more powerful version called Magistral Medium for business customers. 'The best human thinking isn't linear – it weaves through logic, insight, uncertainty, and discovery. Reasoning language models have enabled us to augment and delegate complex thinking and deep understanding to AI,' Mistral said. American companies have mostly kept their most advanced models proprietary, though a handful, such as Meta, have released open-source models. In contrast, Chinese firms such as DeepSeek have taken the open-source path to demonstrate their technological capabilities.

Source: Reuters. Image credit: Dado Ruvic

France's Mistral unveils its first 'reasoning' AI model
Al Etihad, 6 days ago

10 June 2025 23:31 PARIS (AFP) -- French artificial intelligence startup Mistral on Tuesday announced a so-called "reasoning" model it said was capable of working through complex problems, following in the footsteps of top US rivals. Available immediately on the company's platforms as well as the AI platform Hugging Face, the Magistral model "is designed to think things through -- in ways familiar to us," Mistral said in a blog post. The AI was designed for "general purpose use requiring longer thought processing and better accuracy" than its previous generations of large language models (LLMs), the company said. Like other "reasoning" models, Magistral displays a so-called "chain of thought" that purports to show how the system is approaching a problem given to it in natural language. That means users in fields like law, finance, healthcare and government would receive "traceable reasoning that meets compliance requirements" as "every conclusion can be traced back through its logical steps", Mistral said. The company's claim gestures towards the challenge of so-called "interpretability" -- working out how AI systems arrive at a given output. Because they are "trained" on gigantic corpuses of data rather than directly programmed by humans, much behaviour by AI systems remains impenetrable even to their creators. Mistral also vaunted the model's improved performance in software coding and creative writing. Other "reasoning" models include OpenAI's o3, some versions of Google's Gemini and Anthropic's Claude, and Chinese challenger DeepSeek's R1. The idea that AIs can "reason" was called into question this week by Apple -- the tech giant that has struggled to match achievements by leaders in the field. Several Apple researchers published a paper called "The Illusion of Thinking" that claimed to find "fundamental limitations in current models" which "fail to develop generalizable reasoning capabilities beyond certain complexity thresholds".
