
In Canadian Lake, Robot Learns to Mine Without Disrupting Marine Life
Three robotic arms extended underwater in a Canadian lake, delicately plucking pebbles from the bed before storing them inside the machine.
The exercise was part of a series of tests the robot was undergoing before planned deployment in the ocean, where its operators hope the machine can transform the search for the world's most sought-after metals.
The robot was made by Impossible Metals, a company founded in California in 2020, which says it is trying to develop technology that allows the seabed to be harvested with limited ecological disruption.
Conventional underwater harvesting involves scooping up huge amounts of material in search of potato-sized rocks called polymetallic nodules.
These nodules contain nickel, copper, cobalt, or other metals needed for electric vehicle batteries, among other key products.
Impossible Metals' co-founder Jason Gillham told AFP his company's robot looks for the nodules "in a selective way."
The prototype, being tested in the province of Ontario, remains stationary in the water, hovering over the lake bottom.
In a lab, company staff monitor the yellow robot on screens, using what looks like a video game console to direct its movements.
Using lights, cameras and artificial intelligence, the robot tries to identify the sought-after nodules while leaving aquatic life, such as octopuses' eggs, coral, or sponges, undisturbed.
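That description amounts to a per-object decision rule: classify what the cameras see, pick only confirmed nodules that are clear of visible life, and respect a quota of nodules deliberately left in place (the 60 percent set-aside the company describes later in this article). The sketch below is purely illustrative; the names, thresholds, and structure are hypothetical assumptions, not Impossible Metals' actual software.

```python
# Illustrative sketch only: a selective-harvesting decision rule of the kind
# described in the article. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "nodule", "sponge", "coral", "octopus_eggs"
    confidence: float   # classifier confidence, 0.0 to 1.0
    near_biota: bool    # visible life detected within a safety margin

def should_pick(det: Detection, picked: int, seen_nodules: int,
                leave_fraction: float = 0.6, min_conf: float = 0.9) -> bool:
    """Pick only high-confidence nodules that are clear of visible life,
    and stop once the take would exceed the allowed fraction."""
    if det.label != "nodule" or det.confidence < min_conf or det.near_biota:
        return False
    # Enforce the stated set-aside: never take more than (1 - leave_fraction)
    # of the nodules observed so far.
    allowed = int((1.0 - leave_fraction) * seen_nodules)
    return picked < allowed

# Example: 10 nodules seen, 3 already picked -> up to 4 may be taken.
print(should_pick(Detection("nodule", 0.97, False), picked=3, seen_nodules=10))  # True
print(should_pick(Detection("sponge", 0.99, False), picked=3, seen_nodules=10))  # False
```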
- 'A bit like bulldozers' -
In a first for the nascent sector, Impossible Metals has requested a permit from the administration of US President Donald Trump to use its robot in American waters around Samoa, in the Pacific.
The company is hoping that its promise of limited ecological disruption will give it added appeal.
Competitors, like The Metals Company, use giant machines that roll along the seabed and suck up the nodules, a highly controversial technique.
Douglas McCauley, a marine biologist at the University of California, Santa Barbara, told AFP this method scoops up the ocean floor using collectors or excavators that work "a bit like bulldozers."
Everything is then brought up to ships, where the nodules are separated from waste, which is tossed back into the ocean.
This creates large plumes of sediment and toxins with a multitude of potential impacts, he said.
A less invasive approach, like that advocated by Impossible Metals, would reduce the risk of environmental damage, McCauley explained.
But he noted lighter-touch harvesting is not without risk.
The nodules themselves also harbor living organisms, and removing them, even with a selective technique, involves destroying that habitat, he said.
Impossible Metals admits its technology cannot detect microscopic life, but the company claims to have a policy of leaving 60 percent of the nodules untouched.
McCauley is unconvinced, explaining "ecosystems in the deep ocean are especially fragile and sensitive."
"Life down there moves very slowly, so they reproduce very slowly, they grow very slowly."
Duncan Currie of the Deep Sea Conservation Coalition said it was impossible to assess the impact of any deep-sea harvesting.
"We don't know enough yet either in terms of the biodiversity and the ecosystem down there," he told AFP.
According to the international scientific initiative Ocean Census, only 250,000 species are known, out of the two million that are estimated to populate the oceans.
- High demand -
Mining is "always going to have some impact," said Impossible Metals chief executive and co-founder Oliver Gunasekara, who has spent most of his career in the semiconductor field.
But, he added, "we need a lot more critical minerals, as we want to electrify everything."
Illustrating the global rush toward underwater mining, Impossible Metals has raised US$15 million from investors to build and test a first series of its Eureka 3 robot in 2026.
The commercial version will be the size of a shipping container, expanding from three to 16 arms, with its battery growing from 14 to nearly 200 kilowatt-hours.
The robot will be fully autonomous and self-propelled, operating without cables or tethers to the surface, and equipped with sensors.
While awaiting the US green light, the company hopes to finalize its technology within two to three years, conduct ocean tests, build a fleet, and operate through partnerships elsewhere in the world.