Latest news with #LeCun
Yahoo
5 days ago
- Science
- Yahoo
Meta chief AI scientist Yann LeCun says current AI models lack 4 key human traits
Yann LeCun, Meta's chief AI scientist, says current AI models lack four key traits of human intelligence, and adding them will require a shift in how the models are trained. Meta's V-JEPA is a non-generative AI model that aims to address the problem.

What do all intelligent beings have in common? Four things, according to Meta's chief AI scientist, Yann LeCun. At the AI Action Summit in Paris earlier this year, political leaders and AI experts gathered to discuss AI development. LeCun shared his baseline definition of intelligence with IBM's AI leader, Anthony Annunziata.

"There's four essential characteristics of intelligent behavior that every animal, or relatively smart animal, can do, and certainly humans," he said. "Understanding the physical world, having persistent memory, being able to reason, and being able to plan, and planning complex actions, particularly planning hierarchically."

LeCun said AI systems, especially large language models, have not hit this threshold, and incorporating these capabilities would require a shift in how they are trained. That's why many of the biggest tech companies are cobbling capabilities onto existing models in their race to dominate the AI game, he said. "For understanding the physical world, well, you train a separate vision system. And then you bolt it on the LLM. For memory, you know, you use RAG, or you bolt some associative memory on top of it, or you just make your model bigger," he said.

RAG, which stands for retrieval-augmented generation, is a way to enhance the outputs of large language models using external knowledge sources. It was developed at Meta. All of those, however, are just "hacks," LeCun said.

LeCun has spoken on several occasions about an alternative he calls world-based models. These are models trained on real-life scenarios that have higher levels of cognition than pattern-based AI. In his chat with Annunziata, LeCun offered another definition: "You have some idea of the state of the world at time T, you imagine an action it might take, the world model predicts what the state of the world is going to be from the action you took," he said. But, he said, the world evolves according to an infinite and unpredictable set of possibilities, and the only way to train for them is through abstraction.

Meta is already experimenting with this through V-JEPA, a model it released to the public in February. Meta describes it as a non-generative model that learns by predicting missing or masked parts of a video. "The basic idea is that you don't predict at the pixel level. You train a system to run an abstract representation of the video so that you can make predictions in that abstract representation, and hopefully this representation will eliminate all the details that cannot be predicted," he said.

The concept is similar to how chemists established a fundamental hierarchy for the building blocks of matter. "We created abstractions. Particles, on top of this, atoms, on top of this, molecules, on top of this, materials," he said. "Every time we go up one layer, we eliminate a lot of information about the layers below that are irrelevant for the type of task we're interested in doing." That, in essence, is another way of saying we've learned to make sense of the physical world by creating hierarchies.

Read the original article on Business Insider
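To make LeCun's two ideas concrete (a world model that predicts the next state of the world from the current state and an action, and V-JEPA's approach of predicting in an abstract representation rather than at the pixel level), here is a minimal sketch. It is an illustration only: the dimensions, the tanh layers, and the simple L2 objective are assumptions chosen for readability, not Meta's actual architecture or training recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not V-JEPA's real dimensions).
OBS_DIM, STATE_DIM, ACTION_DIM = 64, 16, 4

# Encoder: maps a raw observation (e.g. a video frame) to an abstract state.
W_enc = rng.normal(scale=0.1, size=(STATE_DIM, OBS_DIM))

# Predictor: given the abstract state at time T and an action, predicts the
# abstract state at time T+1. Prediction happens in representation space,
# never at the pixel level.
W_pred = rng.normal(scale=0.1, size=(STATE_DIM, STATE_DIM + ACTION_DIM))

def encode(obs):
    return np.tanh(W_enc @ obs)

def predict_next_state(state, action):
    return np.tanh(W_pred @ np.concatenate([state, action]))

def latent_prediction_loss(obs_t, action_t, obs_t1):
    """Sketch of the training signal: make the predicted abstract state match
    the encoding of the actually observed next frame. Pixel-level details
    that cannot be predicted never enter the loss, because the loss is
    computed on the abstract representation."""
    s_t = encode(obs_t)
    s_pred = predict_next_state(s_t, action_t)
    s_target = encode(obs_t1)
    return float(np.mean((s_pred - s_target) ** 2))

obs_t = rng.normal(size=OBS_DIM)
obs_t1 = rng.normal(size=OBS_DIM)
action_t = rng.normal(size=ACTION_DIM)
print(latent_prediction_loss(obs_t, action_t, obs_t1))
```

A real joint-embedding predictive architecture needs machinery this sketch omits, such as a way to keep the encoder from collapsing to a constant representation, but the core idea, predicting the next abstract state rather than the next pixel, is the one LeCun describes above.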


India Today
6 days ago
- Science
- India Today
Meta AI chief says AI is good, but it does not have 4 key human traits yet
Artificial intelligence is becoming so good that many humans are beginning to wonder – and be genuinely concerned about – whether AI is ready to replace them. While that may be true in some cases, Meta AI chief scientist Yann LeCun believes human-like AI is not quite there yet.

LeCun is one of the three 'godfathers of AI' (the other two being Geoffrey Hinton and Yoshua Bengio), so what he thinks about these things carries a lot of weight. One of the reasons he feels AI – even with all the advancements – can't compete with people like you and me is that it still lacks four fundamental characteristics. Scientists have been trying to build these traits into AI models to achieve what is called general-purpose AI, but apparently there is no shortcut to human and animal intelligence. According to LeCun, "Understanding the physical world, having persistent memory, being able to reason, and being able to plan, and planning complex actions, particularly planning hierarchically," are the four key human traits that AI models, especially large language models (LLMs), don't have yet.

That is not to say that the companies making these models haven't thought about it, or that they are not thinking about it. But all the existing attempts to bridge the gap between humans and AI have revolved around adding supplementary features to existing models. "For understanding the physical world, well, you train a separate vision system. And then you bolt it on the LLM. For memory, you know, you use RAG, or you bolt some associative memory on top of it, or you just make your model bigger," he explained. RAG, which is short for retrieval-augmented generation, is a technology that was pioneered by Meta – more specifically LeCun and co. – to enhance LLM responses with external knowledge sources.

LeCun has dismissed the current "hacks" and instead advocated for an alternative approach using what he calls "world-based models." Defining the concept further, LeCun said, "You have some idea of the state of the world at time T, you imagine an action it might take, the world model predicts what the state of the world is going to be from the action you took." Abstraction, then, is the key for AI to accurately anticipate the infinite and unpredictable possibilities of the real world if it is to reach human-like intelligence.

Meta is actively exploring this approach with something called V-JEPA. Released in February, it is a non-generative model that learns by predicting masked parts of a video. "The basic idea is that you don't predict at the pixel level. You train a system to run an abstract representation of the video so that you can make predictions in that abstract representation, and hopefully this representation will eliminate all the details that cannot be predicted," LeCun elaborated, adding that hierarchical understanding is also fundamental to making sense of the physical world – a crucial element missing in current AI models.

LeCun has been a staunch believer in AI eventually becoming as intelligent as humans, though he has predicted that this will take time. Previously, he has challenged the likes of Elon Musk, who said, "AI will probably be smarter than any single human by 2025 and by 2029, it is probably smarter than all humans combined." At the same time, LeCun has sought to calm fears about AI potentially taking over humanity, saying, "AI is not some sort of natural phenomenon that will just emerge and become dangerous."
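The RAG approach mentioned above can be illustrated with a toy example. This is a minimal sketch under simplifying assumptions: the tiny corpus, the word-overlap relevance score, and the call_llm stub are hypothetical stand-ins for a real document index, embedding-based retrieval, and an actual model call.

```python
# Toy retrieval-augmented generation (RAG) loop: retrieve relevant snippets
# from an external knowledge source, then prepend them to the model prompt.
# The corpus, scoring function, and call_llm stub are illustrative only.

CORPUS = [
    "V-JEPA is a non-generative model that predicts masked parts of video.",
    "RAG augments a language model with retrieved external documents.",
    "The AI Action Summit was held in Paris.",
]

def score(query: str, doc: str) -> int:
    # Crude relevance score: count shared lowercase words.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    # Return the k corpus entries that best match the query.
    return sorted(CORPUS, key=lambda d: score(query, d), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for an actual LLM API call.
    return f"[model answer conditioned on a prompt of {len(prompt)} chars]"

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return call_llm(prompt)

print(rag_answer("What does RAG do for a language model?"))
```

The design point is simply that the model's answer is conditioned on retrieved external text rather than on its parameters alone, which is why LeCun describes it as a bolt-on fix for memory rather than built-in persistent memory.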

Business Insider
28-04-2025
- Science
- Business Insider
Meta's chief AI scientist says scaling AI won't make it smarter
For years the AI industry has abided by a set of principles known as "scaling laws." OpenAI researchers outlined them in the seminal 2020 paper, "Scaling Laws for Neural Language Models." "Model performance depends most strongly on scale, which consists of three factors: the number of model parameters N (excluding embeddings), the size of the dataset D, and the amount of compute C used for training," the authors wrote. In essence, more is more when it comes to building highly intelligent AI.

This idea has fueled huge investments in data centers that allow AI models to process and learn from huge amounts of existing information. But recently, AI experts across Silicon Valley have started to challenge that doctrine.

"Most interesting problems scale extremely badly," Meta's chief AI scientist, Yann LeCun, said at the National University of Singapore on Sunday. "You cannot just assume that more data and more compute means smarter AI." LeCun's point hinges on the idea that training AI on vast amounts of basic subject matter, like internet data, won't lead to some sort of superintelligence. Smart AI is a different breed.

"The mistake is that very simple systems, when they work for simple problems, people extrapolate them to think that they'll work for complex problems," he said. "They do some amazing things, but that creates a religion of scaling that you just need to scale systems more and they're going to naturally become more intelligent."

Right now, the impact of scaling is magnified because many of the latest breakthroughs in AI are actually "really easy," LeCun said. The biggest large language models today are trained on roughly the amount of information in the visual cortex of a four-year-old, he said. "When you deal with real-world problems with ambiguity and uncertainty, it's not just about scaling anymore," he added.

AI advancements have been slowing lately. This is due, in part, to a dwindling corpus of usable public data. LeCun is not the only prominent researcher to question the power of scaling. Scale AI CEO Alexandr Wang said scaling is "the biggest question in the industry" at the Cerebral Valley conference last year. Cohere CEO Aidan Gomez called it the "dumbest" way to improve AI models.

LeCun advocates for a more world-based training approach. "We need AI systems that can learn new tasks really quickly. They need to understand the physical world — not just text and language but the real world — have some level of common sense, and abilities to reason and plan, have persistent memory — all the stuff that we expect from intelligent entities," he said during his talk Sunday.

Last year, on an episode of Lex Fridman's podcast, LeCun said that in contrast to large language models, which can only predict their next steps based on patterns, world models have a higher level of cognition. "The extra component of a world model is something that can predict how the world is going to evolve as a consequence of an action you might take."
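The passage quoted above from "Scaling Laws for Neural Language Models" is usually written as a set of power laws, for example loss as a function of parameter count N. The sketch below uses the approximate constants reported in that paper for the parameter-count law; treat them as illustrative values rather than exact figures.

```python
# Power-law form of the 2020 scaling laws: with data and compute not
# bottlenecked, test loss falls as a power of model size N:
#     L(N) = (N_c / N) ** alpha_N
# alpha_N and N_c below are the approximate fits reported in the paper;
# they are illustrative, not a recomputation.
ALPHA_N = 0.076
N_C = 8.8e13  # parameters

def loss_from_params(n_params: float) -> float:
    return (N_C / n_params) ** ALPHA_N

for n in (1e9, 1e10, 1e11, 1e12):
    print(f"N = {n:.0e}  ->  predicted loss ~ {loss_from_params(n):.3f}")
```

Because the exponent is small, each tenfold increase in parameters buys a smaller absolute drop in loss than the last, which is the diminishing-returns picture behind the argument that scaling alone will not keep making models smarter.
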
Yahoo
20-04-2025
- Politics
- Yahoo
Meta's chief AI scientist calls French initiative to attract US scientists a 'smart move'
Meta's AI chief praised France's initiative to attract top scientists and engineers. France is increasing funding for universities and research organizations to entice foreign talent. Trump, meanwhile, has tightened immigration and cut research funding in the United States.

Since taking office in January, President Donald Trump has tightened immigration controls, cut funding for government grants and research, reduced staffing at NASA and NOAA, and attacked top universities. France seems to have sensed an opportunity.

The National Research Agency, part of the Education Ministry, announced on Friday a "Choose France for Science" initiative to attract scientists from abroad, opening up more government funding for universities, schools, and research organizations to entice foreign talent. "As the international context creates the conditions for an unprecedented wave of mobility among researchers around the world, France aims to position itself as a host country for those wishing to continue their work in Europe, drawing on the country's research ecosystem and infrastructure," the agency said in a statement.

In a LinkedIn post, French President Emmanuel Macron said that research is a "priority." "Researchers from around the world, choose France, choose Europe!" he wrote.

Meta's chief AI scientist, Yann LeCun, who was born in France, responded to the announcement on Saturday, calling the initiative a "smart move." LeCun has criticized Trump for targeting public research funding. Last month, he wrote on LinkedIn that the "US seems set on destroying its public research funding system. Many US-based scientists are looking for a Plan B." In that same post, he told European countries, "You may have an opportunity to attract some of the best scientists in the world."

LeCun is not the only tech leader to criticize the Trump administration's policy decisions regarding science, research, and education. Last week, former Google CEO Eric Schmidt said the administration has launched a "total attack on all of science in America." Speaking at the AI+Biotechnology Summit, Schmidt said he knew people in the tech space who planned to return to London because "they don't want to work in this environment."

Read the original article on Business Insider