Generative AI's most prominent skeptic doubles down


Japan Today · 2 days ago

Two and a half years after ChatGPT rocked the world, scientist and writer Gary Marcus remains generative artificial intelligence's great skeptic, offering a counter-narrative to Silicon Valley's AI true believers.
Marcus became a prominent figure of the AI revolution in 2023, when he sat beside OpenAI chief Sam Altman at a Senate hearing in Washington as both men urged politicians to take the technology seriously and consider regulation.
Much has changed since then. Altman has abandoned his calls for caution, instead teaming up with Japan's SoftBank and funds in the Middle East to propel his company to sky-high valuations as he tries to make ChatGPT the next era-defining tech behemoth.
"Sam's not getting money anymore from the Silicon Valley establishment," and his seeking funding from abroad is a sign of "desperation," Marcus told AFP on the sidelines of the Web Summit in Vancouver, Canada.
Marcus's criticism centers on a fundamental belief: generative AI, the predictive technology that churns out seemingly human-level content, is simply too flawed to be transformative.
The large language models (LLMs) that power these capabilities are inherently broken, he argues, and will never deliver on Silicon Valley's grand promises.
"I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there. And I think the companies running it are not mostly the best people in the world."
His skepticism stands in stark contrast to the prevailing mood at the Web Summit, where most conversations among 15,000 attendees focused on generative AI's seemingly infinite promise.
Many believe humanity stands on the cusp of achieving superintelligence or artificial general intelligence (AGI), technology that could match and even surpass human capability.
That optimism has driven OpenAI's valuation to $300 billion, an unprecedented level for a startup, with billionaire Elon Musk's xAI racing to keep pace.
Yet for all the hype, the practical gains remain limited.
The technology excels mainly at coding assistance for programmers and text generation for office work. AI-created images, while often entertaining, serve primarily as memes or deepfakes, offering little obvious benefit to society or business.
Marcus, a longtime New York University professor, champions a fundamentally different approach to building AI -- one he believes might actually achieve human-level intelligence in ways that current generative AI never will.
"One consequence of going all-in on LLMs is that any alternative approach that might be better gets starved out," he explained.
This tunnel vision will "cause a delay in getting to AI that can help us beyond just coding -- a waste of resources."
Instead, Marcus advocates for neurosymbolic AI, an approach that attempts to rebuild human logic artificially rather than simply training computer models on vast datasets, as is done with ChatGPT and similar products like Google's Gemini or Anthropic's Claude.
He dismisses fears that generative AI will eliminate white-collar jobs, citing a simple reality: "There are too many white-collar jobs where getting the right answer actually matters."
This points to AI's most persistent problem: hallucinations, the technology's well-documented tendency to produce confident-sounding mistakes.
Even AI's strongest advocates acknowledge this flaw may be impossible to eliminate.
Marcus recalls a telling exchange from 2023 with LinkedIn founder Reid Hoffman, a Silicon Valley heavyweight: "He bet me any amount of money that hallucinations would go away in three months. I offered him $100,000 and he wouldn't take the bet."
Looking ahead, Marcus warns of a darker consequence once investors realize generative AI's limitations: companies like OpenAI will inevitably monetize their most valuable asset -- user data.
"The people who put in all this money will want their returns, and I think that's leading them toward surveillance," he said, pointing to Orwellian risks for society. "They have all this private data, so they can sell that as a consolation prize."
Marcus acknowledges that generative AI will find useful applications in areas where occasional errors don't matter much.
"They're very useful for auto-complete on steroids: coding, brainstorming, and stuff like that," he said. "But nobody's going to make much money off it because they're expensive to run, and everybody has the same product."
© 2025 AFP


Related Articles

Google makes case for keeping Chrome browser
Japan Today · 20 hours ago

The U.S. antitrust case poised to weaken Google's dominance in online search comes as OpenAI, Perplexity and other rivals put generative artificial intelligence to work fetching information from the internet for users.

By Thomas URBAIN

Google on Friday urged a U.S. judge to reject the notion of making it spin off its Chrome browser to weaken its dominance in online search.

Rival attorneys made their final arguments before U.S. District Court Judge Amit Mehta, who is considering "remedies" to impose after his landmark ruling last year that Google maintained an illegal monopoly in search.

U.S. government attorneys have called on Mehta to order Google to divest itself of the Chrome browser, contending that artificial intelligence is poised to ramp up the tech giant's dominance as the go-to window into the internet. They also want Google barred from agreements with partners such as Apple and Samsung to distribute its search tools -- the agreements that were the focus of the suit against the Silicon Valley internet giant.

Three weeks of testimony ended in early May, with Friday devoted to the rival sides parsing points of law and making their arguments before Mehta in a Washington courtroom.

John Schmidtlein, an attorney for Google, told Mehta that no evidence presented showed people would have opted for a different search engine if no exclusivity deals had been in place. Schmidtlein noted that Verizon installed Chrome on smartphones even though the U.S. telecom titan owned the Yahoo! search engine and was not bound by a contract with Google. Of the 100 or so witnesses heard at trial, not one said "if I had more flexibility, I would have installed Bing," Microsoft's search engine, the Google attorney told the judge.

Department of Justice attorney David Dahlquist countered that Apple, which was paid billions of dollars to make Google the default search engine on iPhones, "repeatedly asked for more flexibility" but was denied by Google.
Google contends that the United States has gone far beyond the scope of the suit by recommending a spinoff of Chrome and by holding open the option of forcing a sale of its Android mobile operating system.

"Forcing the sale of Chrome or banning default agreements wouldn't foster competition," said Cato Institute senior fellow in technology policy Jennifer Huddleston. "It would hobble innovation, hurt smaller players, and leave users with worse products."

The prospect of Chrome being weakened or spun off comes as rivals such as Microsoft, OpenAI and Perplexity put generative artificial intelligence (AI) to work fetching information from the internet in response to user queries. The online search antitrust suit was filed against Google some five years ago, before ChatGPT made its debut and triggered the current AI fervor. Google is among the tech companies investing heavily to be a leader in AI, and it is weaving the technology into search and its other online offerings.

Testimony at trial included Apple vice president of services Eddy Cue revealing that Google's search traffic on Apple devices declined in April for the first time in more than two decades. Cue testified that Google was losing ground to AI alternatives such as ChatGPT and Perplexity.

Mehta pressed the rival attorneys on the potential for Google to share data as proposed by the DOJ in its recommended remedies. "We're not looking to kneecap Google," DOJ attorney Adam Severt told the judge. "But we are looking to make sure someone can compete with Google."

Schmidtlein contended that the data Google is being asked to share contains much more than information about people's online searches, saying that handing it over would be tantamount to surrendering the fruit of decades of investment. "There are countless algorithms that Google engineers have invented that have nothing to do with click and query data," Schmidtlein said. "Their remedy says we want to be on par with all of your ingenuity, and, respectfully, your honor, that is not proportional to the conduct of this case."

© 2025 AFP


Chinese AI startup DeepSeek pushes US rivals with R1 model upgrade
Nikkei Asia · 2 days ago

SHANGHAI/BEIJING (Reuters) -- Chinese artificial intelligence startup DeepSeek released the first update to its hit R1 reasoning model in the early hours of Thursday, stepping up competition with U.S. rivals such as OpenAI. DeepSeek said via developer platform Hugging Face that R1-0528 was a minor version upgrade of R1 that nevertheless significantly improved its depth of reasoning and inference capabilities, including better handling of complex tasks, bringing its performance closer to OpenAI's o3 reasoning models and Google's Gemini 2.5 Pro.
