Latest news with #V-JEPA
Yahoo
5 days ago
- Science
- Yahoo
Meta chief AI scientist Yann LeCun says current AI models lack 4 key human traits
- Yann LeCun says there are four traits of human intelligence.
- Meta's chief AI scientist says AI lacks these traits, requiring a shift in training methods.
- Meta's V-JEPA is a non-generative AI model that aims to solve the problem.

What do all intelligent beings have in common? Four things, according to Meta's chief AI scientist, Yann LeCun.

At the AI Action Summit in Paris earlier this year, political leaders and AI experts gathered to discuss AI development. LeCun shared his baseline definition of intelligence with IBM's AI leader, Anthony Annunziata.

"There's four essential characteristics of intelligent behavior that every animal, or relatively smart animal, can do, and certainly humans," he said. "Understanding the physical world, having persistent memory, being able to reason, and being able to plan, and planning complex actions, particularly planning hierarchically."

LeCun said AI, especially large language models, have not hit this threshold, and incorporating these capabilities would require a shift in how they are trained. That's why many of the biggest tech companies are cobbling capabilities onto existing models in their race to dominate the AI game, he said.

"For understanding the physical world, well, you train a separate vision system. And then you bolt it on the LLM. For memory, you know, you use RAG, or you bolt some associative memory on top of it, or you just make your model bigger," he said.

RAG, which stands for retrieval augmented generation, is a way to enhance the outputs of large language models using external knowledge sources. It was developed at Meta. All those, however, are just "hacks," LeCun said.

LeCun has spoken on several occasions about an alternative he calls world-based models. These are models trained on real-life scenarios and have higher levels of cognition than pattern-based AI. LeCun, in his chat with Annunziata, offered another definition. "You have some idea of the state of the world at time T, you imagine an action it might take, the world model predicts what the state of the world is going to be from the action you took," he said.

But, he said, the world evolves according to an infinite and unpredictable set of possibilities, and the only way to train for them is through abstraction.

Meta is already experimenting with this through V-JEPA, a model it released to the public in February. Meta describes it as a non-generative model that learns by predicting missing or masked parts of a video.

"The basic idea is that you don't predict at the pixel level. You train a system to run an abstract representation of the video so that you can make predictions in that abstract representation, and hopefully this representation will eliminate all the details that cannot be predicted," he said.

The concept is similar to how chemists established a fundamental hierarchy for the building blocks of matter. "We created abstractions. Particles, on top of this, atoms, on top of this, molecules, on top of this, materials," he said. "Every time we go up one layer, we eliminate a lot of information about the layers below that are irrelevant for the type of task we're interested in doing."

That, in essence, is another way of saying we've learned to make sense of the physical world by creating hierarchies.

Read the original article on Business Insider
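To make the RAG pattern LeCun mentions concrete, here is a minimal retrieve-then-generate sketch: rank a small corpus of passages against the query, prepend the best matches to the prompt, and hand the augmented prompt to a language model. The toy corpus, the bag-of-words "embedding", and the call_llm placeholder are illustrative assumptions, not Meta's implementation.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The "embedding" here is a toy bag-of-words vector; real systems use a
# learned embedding model and a vector index, and call_llm stands in
# for whatever language model actually produces the answer.
from collections import Counter
import math

CORPUS = [
    "V-JEPA is a non-generative model that predicts masked parts of a video.",
    "Retrieval augmented generation grounds LLM outputs in external documents.",
    "World models predict the next state of the world given the current state and an action.",
]

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag of words."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank corpus passages by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(CORPUS, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would send the prompt to an LLM here.
    return f"[LLM response conditioned on]\n{prompt}"

def answer(query: str) -> str:
    """Augment the prompt with retrieved passages before calling the model."""
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("How does V-JEPA learn?"))
```

The point of the pattern is that the model's output is conditioned on retrieved text rather than on parameters alone, which is why LeCun describes it as memory bolted onto an existing LLM.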

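LeCun's description of a world model (a state of the world at time T, a candidate action, and a prediction of the resulting state) also maps onto a simple interface, sketched below. The linear dynamics and the brute-force planner are stand-ins under assumed names, not Meta's architecture; a real system would learn the predictor and plan far more cleverly.

```python
# Illustrative world-model interface: predict the next abstract state from
# the current state and a candidate action, then plan by searching over actions.
from dataclasses import dataclass
import numpy as np

@dataclass
class WorldModel:
    A: np.ndarray  # state-transition matrix (learned in a real system)
    B: np.ndarray  # action-effect matrix (learned in a real system)

    def predict(self, state: np.ndarray, action: np.ndarray) -> np.ndarray:
        """State of the world at time T+1, given the state at time T and an action."""
        return self.A @ state + self.B @ action

def plan(model: WorldModel, state: np.ndarray, goal: np.ndarray,
         candidate_actions: list[np.ndarray]) -> np.ndarray:
    """Pick the action whose predicted outcome lands closest to the goal."""
    return min(candidate_actions,
               key=lambda a: np.linalg.norm(model.predict(state, a) - goal))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = WorldModel(A=np.eye(4), B=rng.standard_normal((4, 2)))
    state, goal = rng.standard_normal(4), rng.standard_normal(4)
    actions = [rng.standard_normal(2) for _ in range(8)]
    print("chosen action:", plan(model, state, goal, actions))
```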

India Today
6 days ago
- Science
- India Today
Meta AI chief says AI is good, but it does not have 4 key human traits yet
Artificial intelligence is becoming so good that many humans are beginning to wonder – and be genuinely concerned about – whether AI is ready to replace them. While that may be true in some cases, Meta AI chief scientist Yann LeCun believes human-like AI is not quite there yet.

LeCun is one of the three "godfathers of AI" (the other two being Geoffrey Hinton and Yoshua Bengio), and therefore what he thinks about these things carries a lot of weight. One of the reasons he feels AI – even with all the advancements – can't compete with people like you and me is that it still lacks four fundamental characteristics. Scientists have been trying to build these traits into models to achieve what is called, in AI parlance, general-purpose AI, but apparently there is no shortcut to human and animal intelligence.

According to LeCun, "Understanding the physical world, having persistent memory, being able to reason, and being able to plan, and planning complex actions, particularly planning hierarchically," are the four key human traits that AI models, especially large language models (LLMs), don't have yet.

That is not to say that the companies making these models haven't thought about it, or are not thinking about it. But all the existing attempts to bridge the gap between humans and AI have revolved around adding supplementary features to existing models. "For understanding the physical world, well, you train a separate vision system. And then you bolt it on the LLM. For memory, you know, you use RAG, or you bolt some associative memory on top of it, or you just make your model bigger," he explained.

RAG, which is short for retrieval augmented generation, is a technology that was pioneered by Meta – more specifically LeCun and Co. – to enhance LLM responses with external knowledge sources.

LeCun has dismissed the current approaches as "hacks" and instead advocated for an alternative using what he calls "world-based models." Defining this concept further, LeCun said, "You have some idea of the state of the world at time T, you imagine an action it might take, the world model predicts what the state of the world is going to be from the action you took." And so, abstraction is the key for AI to anticipate the infinite and unpredictable possibilities of the real world if it is to reach human-like intelligence.

Meta is actively exploring this approach with something called V-JEPA. Released in February, it is a non-generative model that learns by predicting masked parts of a video. "The basic idea is that you don't predict at the pixel level. You train a system to run an abstract representation of the video so that you can make predictions in that abstract representation, and hopefully this representation will eliminate all the details that cannot be predicted," LeCun elaborated, adding that hierarchical understanding is also fundamental to making sense of the physical world, a crucial element missing in current AI.

LeCun has been a staunch believer that AI will become as intelligent as humans, though he has predicted that this will take time. He has previously challenged the likes of Elon Musk, who said, "AI will probably be smarter than any single human by 2025 and by 2029, it is probably smarter than all humans combined." At the same time, LeCun has tried to calm concerns about AI potentially taking over humanity, saying, "AI is not some sort of natural phenomenon that will just emerge and become dangerous."
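The "predict in an abstract representation, not at the pixel level" idea can be sketched as a joint-embedding predictive objective: encode the visible part of a clip, encode the masked part with a separate target encoder, predict the masked embedding from the visible one, and take the loss in latent space. The layer sizes, tensor shapes, and EMA update below are assumptions for illustration only, not V-JEPA's actual architecture.

```python
# Sketch of a joint-embedding predictive objective in the spirit of V-JEPA:
# predict the representation of a masked region of the input from the
# representation of the visible region, and measure the error in latent
# space rather than in pixel space.
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM_IN, DIM_LATENT = 256, 64  # flattened patch size and embedding size (assumed)

encoder = nn.Sequential(nn.Linear(DIM_IN, 128), nn.ReLU(), nn.Linear(128, DIM_LATENT))
target_encoder = nn.Sequential(nn.Linear(DIM_IN, 128), nn.ReLU(), nn.Linear(128, DIM_LATENT))
target_encoder.load_state_dict(encoder.state_dict())  # start from the same weights
predictor = nn.Sequential(nn.Linear(DIM_LATENT, 128), nn.ReLU(), nn.Linear(128, DIM_LATENT))
optim = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

def train_step(visible: torch.Tensor, masked: torch.Tensor) -> float:
    """One update: predict the latent of the masked patches from the visible ones."""
    z_visible = encoder(visible)               # abstract representation of what is seen
    with torch.no_grad():
        z_target = target_encoder(masked)      # target representation (no gradient)
    z_pred = predictor(z_visible)              # prediction made in representation space
    loss = F.mse_loss(z_pred, z_target)        # loss in latent space, not pixel space
    optim.zero_grad()
    loss.backward()
    optim.step()
    # Slowly move the target encoder toward the online encoder (EMA update).
    with torch.no_grad():
        for p, tp in zip(encoder.parameters(), target_encoder.parameters()):
            tp.mul_(0.99).add_(0.01 * p)
    return loss.item()

if __name__ == "__main__":
    visible = torch.randn(32, DIM_IN)  # stand-in for embedded visible video patches
    masked = torch.randn(32, DIM_IN)   # stand-in for the masked patches to predict
    print("loss:", train_step(visible, masked))
```

Because the loss is measured on embeddings rather than pixels, the representation is free to drop low-level detail that cannot be predicted, which is the point LeCun makes in the quote above.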
