
Latest news with #skepticism

Generative AI's Most Prominent Skeptic Doubles Down

Asharq Al-Awsat

3 days ago

  • Business
  • Asharq Al-Awsat

Generative AI's Most Prominent Skeptic Doubles Down

Two and a half years after ChatGPT rocked the world, scientist and writer Gary Marcus remains generative artificial intelligence's great skeptic, offering a counter-narrative to Silicon Valley's AI true believers.

Marcus became a prominent figure of the AI revolution in 2023, when he sat beside OpenAI chief Sam Altman at a Senate hearing in Washington as both men urged politicians to take the technology seriously and consider regulation, AFP said.

Much has changed since then. Altman has abandoned his calls for caution, instead teaming up with Japan's SoftBank and funds in the Middle East to propel his company to sky-high valuations as he tries to make ChatGPT the next era-defining tech behemoth.

"Sam's not getting money anymore from the Silicon Valley establishment," Marcus told AFP on the sidelines of the Web Summit in Vancouver, Canada, adding that Altman's search for funding abroad is a sign of "desperation."

Marcus's criticism centers on a fundamental belief: generative AI, the predictive technology that churns out seemingly human-level content, is simply too flawed to be transformative. The large language models (LLMs) that power these capabilities are inherently broken, he argues, and will never deliver on Silicon Valley's grand promises.

"I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there. And I think the companies running it are not mostly the best people in the world."

His skepticism stands in stark contrast to the prevailing mood at the Web Summit, where most conversations among the 15,000 attendees focused on generative AI's seemingly infinite promise. Many believe humanity stands on the cusp of achieving superintelligence, or artificial general intelligence (AGI): technology that could match and even surpass human capability.

That optimism has driven OpenAI's valuation to $300 billion, an unprecedented level for a startup, with billionaire Elon Musk's xAI racing to keep pace.

Yet for all the hype, the practical gains remain limited. The technology excels mainly at coding assistance for programmers and text generation for office work. AI-created images, while often entertaining, serve primarily as memes or deepfakes, offering little obvious benefit to society or business.

Marcus, a longtime New York University professor, champions a fundamentally different approach to building AI, one he believes might actually achieve human-level intelligence in ways that current generative AI never will.

"One consequence of going all-in on LLMs is that any alternative approach that might be better gets starved out," he explained. This tunnel vision will "cause a delay in getting to AI that can help us beyond just coding -- a waste of resources."

'Right answers matter'

Instead, Marcus advocates for neurosymbolic AI, an approach that attempts to rebuild human logic artificially rather than simply training computer models on vast datasets, as is done with ChatGPT and similar products like Google's Gemini or Anthropic's Claude.

He dismisses fears that generative AI will eliminate white-collar jobs, citing a simple reality: "There are too many white-collar jobs where getting the right answer actually matters."

This points to AI's most persistent problem: hallucinations, the technology's well-documented tendency to produce confident-sounding mistakes. Even AI's strongest advocates acknowledge this flaw may be impossible to eliminate.

Marcus recalls a telling exchange from 2023 with LinkedIn founder Reid Hoffman, a Silicon Valley heavyweight: "He bet me any amount of money that hallucinations would go away in three months. I offered him $100,000 and he wouldn't take the bet."

Looking ahead, Marcus warns of a darker consequence once investors realize generative AI's limitations. Companies like OpenAI will inevitably monetize their most valuable asset: user data.

"The people who put in all this money will want their returns, and I think that's leading them toward surveillance," he said, pointing to Orwellian risks for society. "They have all this private data, so they can sell that as a consolation prize."

Marcus acknowledges that generative AI will find useful applications in areas where occasional errors don't matter much. "They're very useful for auto-complete on steroids: coding, brainstorming, and stuff like that," he said. "But nobody's going to make much money off it because they're expensive to run, and everybody has the same product."


'My Father Told Me...': RFK Jr. Makes Wild Warning Undermining Expert Health Advice

Yahoo

23-05-2025

  • Health
  • Yahoo

'My Father Told Me...': RFK Jr. Makes Wild Warning Undermining Expert Health Advice

Health and Human Services Secretary Robert F. Kennedy Jr. on Thursday said assessing health guidance is similar to researching baby strollers as a new mom, urging Americans to 'be skeptical of authority' while serving in a top Cabinet position.

CNN's Kaitlan Collins asked Kennedy if he stood by his earlier comment that people should not be taking medical advice from him, even though his job involves communicating health guidance and recommendations based on his department's expertise.

'Yeah, absolutely,' Kennedy said. 'I'm somebody who is not a physician... and they should also be skeptical about any medical advice. They need to do their own research.'

Kennedy added that when 'you're a mom, you do your own research on your baby carriage, on your baby bottles, on your baby formula,' suggesting a similar approach should be taken when assessing medical advice.

When Collins pointed out that most mothers do not have medical degrees and would rather rely on their physicians, Kennedy claimed that health experts in a democracy 'are subject to all kinds of biases.' 'One of the responsibilities of living in a democracy is to do your own research and to make up your own mind,' he added.

Kennedy also recalled a piece of advice from his father, suggesting it was relevant to their discussion. 'I would say, be skeptical of authority. My father told me that when I was a young kid, people in authority lie,' Kennedy said, baselessly claiming that 'critical thinking was shut down' during the COVID-19 pandemic.

Kennedy, a prominent vaccine skeptic, was nominated to serve in one of the country's top jobs by President Donald Trump. He raised eyebrows during a House subcommittee hearing last week with his answer to a question about whether he would vaccinate his children against measles if they were still young. 'I don't think people should be taking advice, medical advice from me,' he said.

'I think if I answer that question directly that it will seem like I'm giving advice to other people, and I don't want to be doing that,' he continued.

Kennedy, though, has not held back from lending credence to debunked conspiracy theories, including falsely suggesting that vaccines are linked to autism. While his Making America Healthy Again report, released on Thursday, did not touch on that specific claim, it still hinted that the growth of the immunization schedule for children may be detrimental to them, even though childhood vaccination saves millions of lives every year.

'Vaccines benefit children by protecting them from infectious diseases. But as with any medicine, vaccines can have side effects that must be balanced against their benefits,' the report reads. 'Parents should be fully informed of the benefits and risks of vaccines.'
