
Don't be a Luddite, embrace artificial intelligence
https://arab.news/6nt8m
The 20th-century British science fiction writer Arthur C. Clarke famously observed that any sufficiently advanced technology was indistinguishable from magic.
Clarke spent much of his life foretelling, with unerring accuracy, the nature of the world in which we now live. In 1945, for example, he proposed a system of satellites in geostationary orbits ringing the Earth, upon which we now rely for communication and navigation. In 1964, he suggested that the workers of the future 'will not commute ... they will communicate.' Sound familiar? And again in 1964, Clarke predicted that, in the world of the future, 'the most intelligent inhabitants ... won't be men or monkeys, they'll be machines, the remote descendants of today's computers. Now, the present-day electronic brains are complete morons. But this will not be true in another generation. They will start to think, and eventually they will completely outthink their makers.'
It is the accuracy of that last prediction — what Clarke called 'machine learning,' now usually referred to as artificial intelligence — that most exercises those who feel threatened by it. It would be fair to say that AI, or more accurately the exponential speed at which it is acquiring new and innovative capabilities, is not being universally welcomed.
There are two main areas of concern, the first of which may be summarized as: 'AI will eventually kill us all.' This may seem far-fetched, but the reasoning that leads to the doomsday conclusion is not without logic. Broadly, it runs like this: a superior intelligence must eventually conclude that humanity is an inferior species, one that is destroying the very planet on which it depends, and should therefore be eliminated for the protection of everything else. Elon Musk worked this out a long time ago. Why do you think he wants to go to Mars?
Fortunately, humanity is not reliant on Musk for its survival: for that we must thank another great exponent of the science fiction genre, Isaac Asimov. In 1942, he formulated the Three Laws of Robotics, which broadly regulate the relationship between us and machines, and in 1985 he added a 'zeroth' law to precede the first three. It states: 'A robot may not injure humanity or, through inaction, allow humanity to come to harm.'
Asimov's laws apply to fictional machines, of course, but they still influence the ethics that underpin the creation and programming of all artificial intelligence. So, on the whole, I think we are safe.
The second area of concern may be broadly summarized as: 'AI is coming for all our jobs.' While this one may have more traction, it is not a new fear and it predates AI by centuries. It is not difficult to imagine the inventor of the wheel, showing off his creation but being greeted with skepticism by his Neolithic friends: 'No good will come of this. Our legs will become redundant, and those of future generations will wither away and die. This contraption must be destroyed.'
Before the first Industrial Revolution in the 18th and 19th centuries, most people in Europe and North America lived in agrarian communities and worked by hand. The advent of the water mill and the steam engine threw many out of work, as traditional crafts such as spinning and weaving cotton became redundant. However, jobs that had not previously existed were created for boiler makers, ironsmiths and mechanics.
It happened again in the late 19th century, when steam power was superseded by electricity and steam mechanics retrained to become electricians. And again in the 1980s, with the advent of the computer age and the end of repetitive manual tasks, but the creation of new jobs for hardware and software engineers.
Will AI have the same net beneficial effect? There is evidence that it already does. In the UK last week, health chiefs began screening 700,000 women for signs of breast cancer, using AI that can detect changes in breast tissue in a mammogram that even an expert radiologist might miss. In addition, the technology allows screening with only one human specialist instead of the usual two, releasing hundreds of radiologists for other vital work. This AI will save lives.
However, when one door opens, another closes. Also last week, the Authors Guild, the US body that represents writers, created a logo for books to show readers that a work 'emanates from human intellect' and not from artificial intelligence.
You can understand their angst. Large language models, the version of AI that is the authors' target, create the databases from which they produce content by scraping online sources for every word ever published, mostly without the formality of paying the original author. Many journalists have the same complaint. Some major media outlets — including the Associated Press, Axel Springer, the Financial Times, News Corp and The Atlantic — have reached licensing agreements with AI creators. Others, notably The New York Times, have gone down the lawsuit route for breach of copyright.
Perhaps, especially for authors, this is a can of worms best left unopened. It used to be said that a monkey sitting at a keyboard typing at random for an infinite amount of time would eventually produce the complete works of Shakespeare. Mathematicians have shown the odds are vanishingly small on any realistic timescale, but there is no disputing that AI has made the feat more likely. For example, if you were to ask a large language model such as ChatGPT to write a 27,000-word story in the style of Ernest Hemingway about an elderly fisherman and his long struggle to catch a giant marlin, it would almost certainly come up with 'The Old Man and the Sea' — especially since the original is already in the AI's database.
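The arithmetic behind the monkey-at-a-keyboard claim is easy to sketch. The 27-key keyboard (26 letters plus a space bar) and the sample phrase below are illustrative assumptions, not anything from the column:

```python
# Back-of-envelope odds for the "monkey at a typewriter" claim: the chance
# that a random typist hitting one of 27 keys produces a given phrase on a
# single attempt is (1/27) raised to the phrase's length.
phrase = "to be or not to be"  # 18 characters, including spaces
keys = 27  # illustrative keyboard: 26 letters plus a space bar

p = (1 / keys) ** len(phrase)
print(f"probability per attempt: {p:.2e}")
print(f"expected attempts needed: {1 / p:.2e}")
```

Even for one short line, the expected number of attempts dwarfs any plausible amount of typing, which is why the claim fails in practice long before it fails in principle.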
Authors argue that the AI work would have no merit, since it merely copies words and phrases that have already been used by another writer. But does that argument not apply to every new literary work? With the exception of Shakespeare, who coined about 1,700 written neologisms — from 'accommodation' to 'suspicious' — among a total of about 20,000 words in his plays and poems, almost every writer uses words and phrases that have been used by others before them: any literary or artistic merit derives from how a writer deploys those words and phrases. But if a book needs a special logo to distinguish a human author from an AI, what is the point in making the distinction?
In England in the early 19th century, gangs of men called Luddites — after Ned Ludd, a weaver who lost his traditional manual job to mechanization — roamed towns and cities smashing the new machines in the textile industry that they believed were depriving them of employment. They initially enjoyed widespread support, but this melted away when it became clear that the age of steam was creating more jobs than it destroyed. Let that be a lesson for the anti-AI Luddites of the 21st century.
- Ross Anderson is associate editor of Arab News.

Related Articles


Asharq Al-Awsat
Anthropic Says Looking to Power European Tech with Hiring Push
American AI giant Anthropic aims to boost the European tech ecosystem as it expands on the continent, product chief Mike Krieger told AFP Thursday at the Vivatech trade fair in Paris. The OpenAI competitor wants to be "the engine behind some of the largest startups of tomorrow... (and) many of them can and should come from Europe", Krieger said.

Tech industry and political leaders have often lamented Europe's failure to capitalize on its research and education strength to build heavyweight local companies -- with many young founders instead leaving to set up shop across the Atlantic. Krieger's praise for the region's "really strong talent pipeline" chimed with an air of continental tech optimism at Vivatech.

French AI startup Mistral on Wednesday announced a multibillion-dollar tie-up to bring high-powered computing resources from chip behemoth Nvidia to the region. The semiconductor firm will "increase the amount of AI computing capacity in Europe by a factor of 10" within two years, Nvidia boss Jensen Huang told an audience at the southern Paris convention center.

Among 100 planned continental hires, Anthropic is building up its technical and research strength in Europe, where it has offices in Dublin and non-EU capital London, Krieger said. Beyond the startups he hopes to boost, many long-standing European companies "have a really strong appetite for transforming themselves with AI", he added, citing luxury giant LVMH, which had a large footprint at Vivatech.

'Safe by design'

Mistral -- founded only in 2023 and far smaller than American industry leaders like OpenAI and Anthropic -- is nevertheless "definitely in the conversation" in the industry, Krieger said. The French firm recently followed in the footsteps of the US companies by releasing a so-called "reasoning" model able to take on more complex tasks.
"I talk to customers all the time that are maybe using (Anthropic's AI) Claude for some of the long-horizon agentic tasks, but then they've also fine-tuned Mistral for one of their data processing tasks, and I think they can co-exist in that way," Krieger said.

So-called "agentic" AI models -- including the most recent versions of Claude -- work as autonomous or semi-autonomous agents that are able to do work over longer horizons with less human supervision, including by interacting with tools like web browsers and email. Capabilities displayed by the latest releases have raised fears among some researchers, such as University of Montreal professor and "AI godfather" Yoshua Bengio, that independently acting AI could soon pose a risk to humanity. Bengio last week launched a non-profit, LawZero, to develop "safe-by-design" AI -- originally a key founding promise of OpenAI and Anthropic.

'Very specific genius'

"A huge part of why I joined Anthropic was because of how seriously they were taking that question" of AI safety, said Krieger, a Brazilian software engineer who co-founded Instagram, which he left in 2018. Anthropic is still working on measures designed to restrict their AI models' potential to do harm, he added.

But it has yet to release details of its "level 4" AI safety protections foreseen for still more powerful models, after activating ASL (AI Safety Level) 3 to corral the capabilities of May's Claude Opus 4 release. Developing ASL 4 is "an active part of the work of the company", Krieger said, without giving a potential release date. With Claude Opus 4, "we've deployed the mitigations kind of proactively... safe doesn't have to mean slow, but it does mean having to be thoughtful and proactive ahead of time" to make sure safety protections don't impair performance, he added.
Looking to upcoming releases from Anthropic, Krieger said the company's models were on track to match chief executive Dario Amodei's prediction that Anthropic would offer customers access to a "country of geniuses in a data center" by 2026 or 2027 -- within limits. Anthropic's latest AI models are "genius-level at some very specific things", he said. "In the coming year... it will continue to spike in particular aspects of things, and still need a lot of human-in-the-loop coordination," he forecast.


Arab News
How emerging AI talent is shaping the future of smart healthcare in Saudi Arabia
RIYADH: As Saudi Arabia accelerates its investment in AI-powered healthcare, two young researchers from the Mohamed bin Zayed University of Artificial Intelligence are building the very tools that hospitals in the Kingdom will soon need — intelligent, interpretable, and scalable systems for diagnosis and prognosis.

Although the university's 2025 cohort did not include Saudi nationals, the work of two standout graduates, Mohammed Firdaus Ridzuan and Tooba Tehreem Sheikh, directly aligns with Saudi Arabia's healthcare transformation plans under Vision 2030. Their research offers practical, forward-looking solutions for the Kingdom's next generation of smart hospitals.

At a time when AI systems are being deployed across diagnostic units in Saudi hospitals, from the King Faisal Specialist Hospital to new initiatives backed by the Saudi Data and AI Authority, the focus is shifting from capability to clarity. Can the systems provide real-time support? Can they explain their reasoning? Can doctors intervene? These are the questions both Ridzuan and Sheikh have set out to answer.

Ridzuan, a PhD graduate in machine learning, developed Human-in-the-Loop for Prognosis, or HuLP for short — a cancer survival prediction system that places doctors back at the center of AI-powered decision-making. 'While AI has made significant strides in diagnosing diseases, predicting individual survival outcomes, especially in cancer, is still a challenging task,' Ridzuan told Arab News. 'Our model addresses this by enabling real-time clinician intervention.'

Unlike traditional models that operate in isolation, HuLP is built for collaboration. Medical professionals can adjust and refine its predictions using their clinical expertise. These adjustments are not just temporary; they influence how the model evolves. 'Doctors and medical professionals can actively engage with the system,' Ridzuan said. 'Their insights don't just influence the result — they actually help the model learn.'
This approach to human-AI partnership ensures that predictions remain explainable, context-aware, and grounded in patient-specific realities, a key need for Saudi hospitals integrating AI at scale. 'By allowing clinicians to dynamically adjust predictions, we create a more adaptive and responsive system that can handle local challenges,' Ridzuan added.

The Kingdom's healthcare institutions are undergoing a digital transformation driven by national entities like SDAIA, the Ministry of Health, and the Center for Artificial Intelligence in Medicine and Innovation. These entities are focused not only on adopting new AI tools but also on ensuring that these systems can integrate into clinical workflows. This is where Ridzuan sees HuLP making an impact.

'Smart hospitals are already integrating AI diagnostic tools for medical imaging and patient data analysis,' he said. 'Our model can take this to the next level by empowering clinicians to interact with and guide the system's predictions.'

In settings where trust and transparency are vital, Ridzuan's collaborative model could help hospitals overcome one of AI's most persistent problems: the black box effect. This refers to the opaque nature of certain systems, particularly in the field of AI, where the internal workings and decision-making processes are hidden or unknown.

The emphasis on local relevance also comes through in HuLP's design. Ridzuan says real-time data from regional healthcare systems is essential for training accurate, context-sensitive models. 'Local data provides insights into the unique health conditions and medical practices within the Gulf region,' he said. 'Integrating this data ensures that the AI is attuned to the specific needs and health profiles of patients in the region.'

The system is built to learn continuously. As clinicians correct or refine its predictions, the model updates itself, improving with each interaction.
This feedback loop is crucial for real-world deployments, especially in the Gulf, where data quality can be inconsistent.

While Ridzuan is focused on outcomes, Sheikh, an MSc graduate in computer vision, is transforming the way hospitals detect disease in the first place. Her project, Med-YOLOWorld, is a next-generation imaging system that can read nine types of medical scans in real time. Unlike traditional radiology AI tools, which are often limited to specific tasks, Med-YOLOWorld operates with open-vocabulary detection. That means it can identify anomalies and organ structures that it has not been explicitly trained on — a key feature for scalability.

'Most models are confined to a single modality like CT or X-ray,' Sheikh told Arab News. 'Med-YOLOWorld supports nine diverse imaging types, including ultrasound, dermoscopy, microscopy, and histopathology.'

With support for up to 70 frames per second, the system is designed for clinical deployment in high-demand environments. Sheikh sees clear potential for its use in Saudi Arabia, where institutions like the King Faisal Specialist Hospital and Research Centre are already implementing multi-modal AI imaging tools. 'It can seamlessly integrate with existing imaging systems to enable open-vocabulary detection,' she said. 'Identifying a wide range of medical findings — even those outside its original training set — is essential for fast-paced clinical environments.'

But building a universal imaging tool came with its own technical hurdles. 'The biggest challenge was managing the diverse preprocessing requirements across imaging modalities,' Sheikh said. 'CT and MRI scans need intensity normalization, while ultrasound, dermoscopy, and microscopy have completely different visual characteristics.' Data imbalance was another issue. While MRI and CT scans are widely available, data for more niche imaging types is scarce.
Sheikh tackled this by designing custom augmentation techniques to ensure the model performs consistently across all modalities. She is now working on combining Med-YOLOWorld with vision-language models, systems that explain what they see in natural language. 'MiniGPT-Med does a great job at explaining radiology images,' she said. 'But pairing it with a system like Med-YOLOWorld adds a crucial dimension — open-world localization. Not just describing the issue but pointing to it.'

This fusion could create a powerful end-to-end diagnostic pipeline: detect, explain, and localize. For Saudi hospitals embracing AI-driven imaging, the impact could be transformative.

For Sheikh, the global implications of her work are just as important as the technical achievements. 'Med-YOLOWorld reduces the need for large, annotated datasets,' she said. 'In fast-scaling healthcare systems, that's a game-changer.' By enabling the detection of unseen categories, the system can remain relevant even as new diseases or anomalies emerge. And when combined with language models, it can assist in medical training, annotations, and decision support, all while reducing dependence on expert-labeled data.

This approach could accelerate AI adoption in emerging regions, including across the Gulf and the wider Middle East and North Africa, where access to large datasets and AI-specialized radiologists remains limited. While MBZUAI is based in the UAE, its alumni are playing a growing role in shaping AI initiatives that extend across the Gulf. Both Ridzuan and Sheikh have demonstrated how innovation, when aligned with clinical realities and regional goals, can scale far beyond the lab.

As Saudi Arabia continues to invest in smart hospitals, real-time imaging, and personalized care, tools like HuLP and Med-YOLOWorld represent the next wave of AI in healthcare: explainable, collaborative, and regionally adaptable.
And with growing partnerships between research institutions, healthcare providers, and government entities, these systems may not be far from deployment in the Kingdom, paving the way for a more intelligent, human-centered approach to medical care.


Al Arabiya
Gabbard says AI is speeding up intel work, including the release of the JFK assassination files
Artificial intelligence is speeding up the work of America's intelligence services, Director of National Intelligence Tulsi Gabbard said Tuesday. Speaking to a technology conference, Gabbard said AI programs, when used responsibly, can save money and free up intelligence officers to focus on gathering and analyzing information. The sometimes slow pace of intelligence work frustrated her as a member of Congress, Gabbard said, and continues to be a challenge.

AI can run human resource programs, for instance, or scan sensitive documents ahead of potential declassification, Gabbard said. Her office has released tens of thousands of pages of material related to the assassinations of President John F. Kennedy and his brother, New York Sen. Robert F. Kennedy, on the orders of President Donald Trump. Experts had predicted the process could take many months or even years, but AI accelerated the work by scanning the documents to see if they contained any material that should remain classified, Gabbard said during her remarks at the Amazon Web Services Summit in Washington.

'We have been able to do that through the use of AI tools far more quickly than what was done previously — which was to have humans go through and look at every single one of these pages,' Gabbard said.

The intelligence community already relies on many private-sector technologies, and Gabbard said she wants to expand that relationship instead of using federal resources to create expensive alternatives. 'How do we look at the available tools that exist — largely in the private sector — to make it so that our intelligence professionals, both collectors and analysts, are able to focus their time and energy on the things that only they can do,' she said.

Gabbard, who coordinates the work of 18 intelligence agencies, has vowed to shake up America's spy services. Since assuming her role this year, she has created a new task force to consider changes to agency operations as well as greater declassification.
She also has fired two veteran intelligence officers because of perceived opposition to Trump, eliminated diversity, equity and inclusion programs and relocated the staff who prepare the President's Daily Brief to give her more direct control.