
Latest news with #WebSummit

Generative AI's most prominent skeptic doubles down

Time of India

a day ago



Highlights

  • Gary Marcus remains a prominent skeptic of generative artificial intelligence, arguing that the large language models which underpin the technology are fundamentally flawed and will not deliver on the promises made by Silicon Valley advocates.
  • Despite the hype surrounding generative AI and its rapid valuation growth, Marcus emphasizes that practical applications remain limited, primarily to coding assistance and office tasks, and he warns of the technology's tendency to produce confident-sounding mistakes known as hallucinations.
  • Marcus advocates a neurosymbolic approach to artificial intelligence, which focuses on rebuilding human logic rather than relying solely on vast datasets, cautioning that the current emphasis on large language models may delay progress toward true human-level intelligence.

Two and a half years since ChatGPT rocked the world, scientist and writer Gary Marcus remains generative artificial intelligence's great skeptic, playing a counter-narrative to Silicon Valley's AI true believers.

Marcus became a prominent figure of the AI revolution in 2023, when he sat beside OpenAI chief Sam Altman at a Senate hearing in Washington as both men urged politicians to take the technology seriously and consider regulation.

Much has changed since then. Altman has abandoned his calls for caution, instead teaming up with Japan's SoftBank and funds in the Middle East to propel his company to sky-high valuations as he tries to make ChatGPT the next era-defining tech behemoth.

"Sam's not getting money anymore from the Silicon Valley establishment," and his seeking funding from abroad is a sign of "desperation," Marcus told AFP on the sidelines of the Web Summit in Vancouver, Canada.

Marcus's criticism centers on a fundamental belief: generative AI, the predictive technology that churns out seemingly human-level content, is simply too flawed to be transformative. The large language models (LLMs) that power these capabilities are inherently broken, he argues, and will never deliver on Silicon Valley's grand promises.

"I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there. And I think the companies running it are not mostly the best people in the world."

His skepticism stands in stark contrast to the prevailing mood at the Web Summit, where most conversations among 15,000 attendees focused on generative AI's seemingly infinite promise. Many believe humanity stands on the cusp of achieving superintelligence or artificial general intelligence (AGI), technology that could match and even surpass human capability.

That optimism has driven OpenAI's valuation to $300 billion, an unprecedented level for a startup, with billionaire Elon Musk's xAI racing to keep pace.

Yet for all the hype, the practical gains remain limited. The technology excels mainly at coding assistance for programmers and text generation for office work. AI-created images, while often entertaining, serve primarily as memes or deepfakes, offering little obvious benefit to society or business.

Marcus, a longtime New York University professor, champions a fundamentally different approach to building AI -- one he believes might actually achieve human-level intelligence in ways that current generative AI never will.

"One consequence of going all-in on LLMs is that any alternative approach that might be better gets starved out," he explained. This tunnel vision will "cause a delay in getting to AI that can help us beyond just coding -- a waste of resources."

'Right answers matter'

Instead, Marcus advocates for neurosymbolic AI, an approach that attempts to rebuild human logic artificially rather than simply training computer models on vast datasets, as is done with ChatGPT and similar products like Google's Gemini or Anthropic's Claude.

He dismisses fears that generative AI will eliminate white-collar jobs, citing a simple reality: "There are too many white-collar jobs where getting the right answer actually matters."

This points to AI's most persistent problem: hallucinations, the technology's well-documented tendency to produce confident-sounding mistakes. Even AI's strongest advocates acknowledge this flaw may be impossible to eliminate.

Marcus recalls a telling exchange from 2023 with LinkedIn founder Reid Hoffman, a Silicon Valley heavyweight: "He bet me any amount of money that hallucinations would go away in three months. I offered him $100,000 and he wouldn't take the bet."

Looking ahead, Marcus warns of a darker consequence once investors realize generative AI's limitations. Companies like OpenAI will inevitably monetize their most valuable asset: user data.

"The people who put in all this money will want their returns, and I think that's leading them toward surveillance," he said, pointing to Orwellian risks for society. "They have all this private data, so they can sell that as a consolation prize."

Marcus acknowledges that generative AI will find useful applications in areas where occasional errors don't matter much. "They're very useful for auto-complete on steroids: coding, brainstorming, and stuff like that," he said. "But nobody's going to make much money off it because they're expensive to run, and everybody has the same product."


Pitches, networking, deal-making: Web Summit thrives in Vancouver

CTV News

2 days ago



The Vancouver Convention Centre has been full of the tech industry's biggest movers and shakers this week. On the penultimate day of Web Summit, the mammoth tech conference saw thousands of attendees flowing between expert panels and a bustling exhibition hall.

Organizers have now confirmed that 50 countries sent government-backed delegations to the summit, a record for the event, which is marking the first of its three years in Vancouver. The first Web Summit was held in Dublin in 2009 with just 150 attendees; now four events are held around the globe each year.

The Polish and Qatari delegations have massive footprints, while Hong Kong and Greece, for example, have smaller dedicated booths.

'We wanted you to see the response of the Canadian market as we have a quite good sample from the European market,' said Georgios Karakatsanis, a Greek entrepreneur with Evotropia. 'We wanted to see the response of markets that are really environmentally friendly and have similar policies to the European Union.'

For Yarn Guru, a Pakistani company using artificial intelligence to match textiles with potential buyers, the event was an opportunity to connect with North American clients. 'Our customers are people who need help finding products globally,' said Akhlaq Rana. 'We use artificial intelligence to find the thickness they want, length, various details which can be very hard to find.'

Canadian entrepreneurs make their case

Startups and established companies from across Canada represented a significant portion of the booths in the exhibition hall, some of them having attended the conference in Toronto, when it was named Collision.

'It is a great platform for startups and entrepreneurs like myself to come and help get the amplification we need to move and scale our product,' said Toronto's Maman Khalid, with cleantech AI marketplace RevoHubInc.

For Vancouver-based ImmerRock, a mixed-reality app for learning guitar, the company gained as much from networking within the field as from pitching to potential investors. 'It's helpful to know what everybody else is doing in the ecosystem, kind of get to know each other and share some of the problems and solutions that we all kind of have to encounter,' said Evanna Roman.

The representatives weren't all software-oriented, however. Seawise Packaging was there with a cold-packaging alternative to styrofoam, developed in Prince George for a company in Campbell River. '(Being here) opens our eyes so we also get some insightful ideas from (other companies) to improve our product and also we get to connect with some customers,' said Banchi Kassuan, the company's product researcher.

American companies were the second-most common after Canadian ones in the exhibition hall, but many of the delegates from Europe and Asia said that making connections in Canada is important as they try to mitigate — or outright avoid — issues with the United States' erratic trade war, while still doing business in North America.

'I think Canada has a lot to gain by getting a lot of people come work in Canada rather than the United States,' said economist William Lazonick at a Web Summit news conference. 'I think the government policy needs to kind of be ready for that, and corporations have to be doing their part and towards investment.'

