Firms and Researchers at Odds over Superhuman AI

Asharq Al-Awsat, 27 March 2025

Leaders of major AI companies are increasingly hyping the idea that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.
The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction, AFP said.
"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".
Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.
Others, though, are more skeptical.
Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.
LeCun's view appears backed by a majority of academics in the field.
Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.
'Genie out of the bottle'
Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.
Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.
"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."
Skepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.
"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.
A similar, more recent thought experiment is the "paperclip maximiser".
This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of the human beings who, it judged, might hinder its progress by switching it off.
While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.
Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.
He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.
'Biggest thing ever'
The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility program at Britain's Cambridge University.
"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.
Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.
"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".
The challenge can lie in communicating these ideas to politicians and the public.
Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

Related Articles

How AI chatbot Grok sowed misinformation during India-Pakistan military conflict

Arab News, 19 hours ago

WASHINGTON, US: As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification — only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool. With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots — including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini — in search of reliable information.

'Hey @Grok, is this true?' has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.

Grok — now under renewed scrutiny for inserting 'white genocide,' a far-right conspiracy theory, into unrelated queries — wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan air base during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as 'likely' showing Pakistan's military response to Indian strikes.

'The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,' McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. 'Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,' she warned.

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election. In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were 'generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead.'

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken. Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as 'genuine,' even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification. The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularized by X. Researchers have repeatedly questioned the effectiveness of 'Community Notes' in combating falsehoods.

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content — something professional fact-checkers vehemently reject. AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.
The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.

Musk's xAI recently blamed an 'unauthorized modification' for causing Grok to generate unsolicited posts referencing 'white genocide' in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the 'most likely' culprit. Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were 'openly pushing for genocide' of white people.

'We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,' Angie Holan, director of the International Fact-Checking Network, told AFP. 'I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.'

Google judge mulls softer remedies in US search antitrust case

Arab News, 3 days ago

WASHINGTON: A federal judge in Washington suggested on Friday he is considering making Alphabet's Google take less aggressive measures to restore competition in online search than the 10-year regime proposed by antitrust enforcers. US District Judge Amit Mehta heard closing arguments on Friday at a trial on proposals to address Google's illegal monopoly in online search and related advertising.

'Ten years may seem like a short period, but in this space, a lot can change in weeks,' he said, citing recent developments such as ChatGPT maker OpenAI buying a device startup.

The DOJ and a coalition of states want Google to share search data and cease multibillion-dollar payments to Apple and other smartphone makers to be the default search engine on new devices. At the hearing, the judge floated the possibility of limited data sharing and ending the payments only if other measures do not increase competition. He also grappled with the rise of artificial intelligence products that could replace traditional search engines.

An alternate default search engine in Apple's Safari browser is unlikely to come from existing rival search engines like DuckDuckGo or Bing, the judge said. 'If anything it's going to be one of these AI companies that can do more than just search. And why? Because maybe people don't want 10 blue links anymore,' he said, referring to earlier iterations of Google's search engine.

The case has already rattled Google's share price by exposing Apple's plans to offer AI-based search options. The trial began in April and Judge Mehta has said he aims to rule by August.

AI 'rivals'?

Antitrust enforcers are concerned about how Google's search monopoly gives it an advantage in AI products like Gemini and vice versa. Nick Turley, OpenAI's product head for ChatGPT, testified that the ChatGPT creator is years away from its goal of being able to use its own search technology to answer 80 percent of queries, and that having access to Google search data would help it focus on improving ChatGPT. Turley also said OpenAI would be interested in buying Chrome if Google is forced to sell it.

But Mehta questioned whether companies like OpenAI or Perplexity should be considered Google competitors who would be given access to any data Google is required to share, given that the case focused on search engine competitors. 'It seems to me you now want to kind of bring this other technology into the definition of general search engine markets that I am not sure quite fits,' the judge said to DOJ attorney Adam Severt. Severt replied that while the first part of the case focused on the past, the remedies must be forward-looking.

John Schmidtlein, an attorney for Google, said at the hearing that while generative AI is influencing how search looks, Google has addressed any concerns about competition in AI by no longer entering exclusive agreements with wireless carriers and smartphone makers including Samsung Electronics, leaving them free to load rival search and AI apps on new devices. Schmidtlein argued it would be inappropriate to give successful AI companies like OpenAI technology that Google has spent 20 years perfecting. 'Coming to Google and asking Google for a handout when they are the market leader seems completely disproportionate to what this case is about,' he said.

Saudi Arabia: The Next Silicon Valley of the Middle East

Leaders, 3 days ago

Saudi Arabia is rapidly positioning itself as the Middle East's Silicon Valley, with a $40 billion AI investment push aiming to transform the country into a competitive innovation hub. It is consequently attracting a large-scale influx of global talent, as the Saudi government partners directly with global leaders in new-age tech to lay the bedrock for local innovation. Institutional alignment, sovereign capital, and regulatory clarity make the country an execution-first environment for AI and Web3 startups.

The Kingdom is making strategic moves to build its own AI infrastructure through sovereign initiatives like HUMAIN, which ensure data sovereignty, strategic control, and scalable impact. Moreover, they align with national values and priorities, serving as a template for emerging economies. Experts believe this approach will help countries model their digital futures instead of outsourcing them.

Johanna Cabildo, CEO of D-GN, noted, 'Saudi Arabia is building infrastructure, not just regulation.' When a government funds and deploys AI across ministries, oil giants, and smart cities, it creates demand signals that attract private innovators and foster a vibrant tech ecosystem. Cabildo cited Saudi Aramco's engagement with US-based droppGroup as a smart measure to power local innovation.

Vision 2030: A Magnet for Global AI Builders

Saudi Arabia's big AI push aligns with its 'Vision 2030,' turning the country into a magnet for global AI builders. Experts say Saudi Arabia is one of the few markets with both the political will and the economic scale to support the next wave of AI and Web3, and the Saudi market is seeing a healthy combination of technological convergence and active support from local regulators. Jaspreet Bindra, co-founder of AI&Beyond, praised the Kingdom's future-facing approach.

Cabildo emphasized that clear regulations and fast onboarding frameworks will attract founders and make the country a launchpad for global ventures focused on data sovereignty, digital identity, and decentralized intelligence. The Saudi Capital Market Authority has already issued over 80 tokenization licenses, signaling a push to innovate locally. Blockchain adds the transparency and ownership that AI needs, further enhancing the tech ecosystem.

From Consumers to Producers of Innovation

Saudi Arabia, along with countries like the UAE, is moving from consuming innovation to producing it for the world. This shift could turn the Kingdom into a new center of gravity for AI, where capital, talent, and emerging technologies collide. Cabildo described Saudi Arabia's $40 billion AI push as more than an investment: a declaration of intent to build a unique ecosystem.

Collaborations such as Saudi Aramco's mega AI and blockchain deal with droppGroup are expected to significantly impact traditional sectors. In the energy sector, AI can revolutionize operations with predictive maintenance, saving billions by preventing outages. In logistics, AI-driven systems will streamline supply chains, cutting delays at ports like Jeddah. In education, AI will personalize learning, equipping students for a digital economy.

The real transformation will be in the workforce, with Vision 2030's Human Capability Development Programme shifting workers from routine tasks to knowledge-driven roles. Deals like the Aramco-droppGroup collaboration fuel this shift, creating demand for data scientists, AI engineers, and innovation managers, roles critical to a diversified economy.
With such moves, Saudi Arabia aims to generate thousands of tech jobs by 2030, reducing oil dependency.
