
Latest news with #VJEPA2

Meta shows off two big AI upgrades – one helps robots, the other helps you

Phone Arena

11 hours ago

  • Business
  • Phone Arena

Meta shows off two big AI upgrades – one helps robots, the other helps you

When we talk about AI, most people immediately think of Google or OpenAI, the company behind ChatGPT. But while those two often get all the headlines, Mark Zuckerberg's Meta has been moving fast, too – and now it is making some serious noise with two big AI announcements. The company behind Facebook, Instagram, WhatsApp and Threads has just unveiled a new AI model that can actually think before it acts, and a new video editing feature powered by AI. The latter is clearly aimed at the billions of people using Meta's platforms every day.

But let's start with the one that sounds straight out of a sci-fi movie. The new model is called V-JEPA 2, and it is basically a brain for robots and AI agents. The idea? To help them understand the physical world and predict how it reacts to their actions, just like we humans do without even thinking about it. When we walk through a crowded space, we are constantly predicting what is about to happen – avoiding people, dodging obstacles and moving toward our goal. We don't pause to analyze every move; we just know what's likely to happen.

According to Meta, V-JEPA 2 is designed to teach AI that same kind of intuition by building what's called a world model. These world models allow AI to do three core things: understand, predict and plan. V-JEPA 2 is trained on video footage, which lets it learn how objects move, how people interact with those objects and how everything behaves in the physical world. It builds on the original V-JEPA model Meta dropped last year, but it is now better at handling unfamiliar environments – like when a robot encounters a brand-new space or object. Meta says it tested V-JEPA 2 in the lab, and robots using the model were able to do things like reach out, grab objects and move them around. That might sound basic, but in the world of robotics, it's a pretty big deal.

Of course, Meta isn't the only company chasing this type of AI. Google launched its Gemini 2.0 model last year, focused on making AI better at reasoning, remembering and planning. OpenAI is also in the game with its own AI agent that can manage tasks for you. Meta, however, seems to be leaning into helpful use cases – but at the end of the day, nobody really knows how this all plays out. It's clear we are heading into a future where AI doesn't just respond to prompts – it actually starts doing things for us. And yeah, it's both exciting and a little nerve-wracking. On one hand, these tools can help people who really need them. On the other, there is always that risk of us becoming too dependent. What happens when AI starts thinking instead of us?

Moving on. While V-JEPA 2 is all about AI understanding the real world, Meta's second announcement is focused on how you can shape your digital one. It just rolled out a brand-new AI-powered video editing feature that is already live across the Meta AI app, the website and a dedicated new app called Edits. This tool lets you remix short-form videos using preset prompts that can completely change your outfit, background, vibe – even the entire style of the clip. It's now available in the US and more than a dozen other countries.

AI will help you edit your Reels. | Image credit – Meta

Inspired by Meta's Movie Gen models, this feature is just the beginning. Meta says that later this year, you'll be able to use your own text prompts to edit videos exactly how you want, directly alongside Meta AI. The editing process is simple: upload a video to one of the supported platforms, then browse through more than 50 editing prompts.
For now, you can transform up to 10 seconds of your video for free – but that's a limited-time thing.

For now, you get 50 editing prompts. | Image credit – Meta

You can turn your clip into a retro comic book scene, complete with vintage-style illustrations. Or change up the mood of a cloudy video with dreamy sparkles and soft-focus lighting. You can even make it feel like a neon-soaked video game, with your clothes and environment matching the theme. Once you're done, you can share your creation straight to Facebook or Instagram from the Meta AI app or Edits. If you're on the Meta AI website or using the app, you can also post to the Discover feed.

And while this might sound like fun – and yeah, it definitely is – it's also another reminder of where we are headed. Just like Google's new Flow tool that can generate hyper-realistic videos, these kinds of AI-driven editors can blur the line between what's real and what's not. We've already seen deepfake-style videos go viral and trick people. And sure, Meta's tool is meant for creative edits, not deception – but I think it's still a step down that same path.

Meta Says Its New AI Model Understands Physical Rules Like Gravity

CNET

a day ago

  • Science
  • CNET

Meta Says Its New AI Model Understands Physical Rules Like Gravity

A new generative AI model Meta released this week could change how machines understand the physical world, opening up opportunities for smarter robots and more, the company said. The new open-source model, called Video Joint Embedding Predictive Architecture 2, or V-JEPA 2, is designed to help artificial intelligence understand things like gravity and object permanence, Meta said. "By sharing this work, we aim to give researchers and developers access to the best models and benchmarks to help accelerate research and progress," the company said in a blog post, "ultimately leading to better and more capable AI systems that will help enhance people's lives."

Current models that allow AI to interact with the physical world rely on labeled data or video to mimic reality, but this new approach emphasizes the logic of the physical world, including how objects move and interact. The model could allow AI to understand concepts like the fact that a ball rolling off of a table will fall. Meta said the model could be useful for devices like autonomous vehicles and robots by ensuring they don't need to be trained on every possible situation. The company called it a step toward AI that can adapt like humans can.

One struggle in the space of physical AI has been the need for significant amounts of training data, which takes time, money and resources. At SXSW earlier this year, experts said synthetic data -- training data created by AI -- could help prepare a more traditional learning model for unexpected situations. (In Austin, the example used was the emergence of bats from the city's famed Congress Avenue Bridge.) Meta said its new model simplifies the process and makes it more efficient for real-world applications because it doesn't rely on all of that training data.

The next steps for world models include training models that are capable of learning, reasoning and planning across different time and space scales, making them better at breaking down complicated tasks. Multimodal models, which can use other senses like audio and touch in addition to vision, will also help future AI models understand the real world.
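To make the "understand, predict, plan" loop both articles describe a bit more concrete, here is a minimal sketch of how a learned world model is typically used for planning. Everything in it is a hypothetical stand-in – the encoder, the dynamics model and the action set are toy placeholders, not Meta's V-JEPA 2 code or API – but the structure (encode the current observation, roll candidate actions forward with the predictive model, pick the action whose predicted outcome lands closest to the goal) is the general pattern such world models enable.

```python
import numpy as np

# Purely illustrative stand-ins: in a real system these would be networks
# learned from large amounts of video, not random matrices.
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 8))        # "understand": observation -> latent state
W_dyn = rng.normal(size=(8, 8)) * 0.1  # "predict": latent state -> next latent state
W_act = rng.normal(size=(8, 2)) * 0.1  # how a 2-D action nudges the latent state


def encode(observation):
    """Map a raw observation (stand-in for a video frame) to a latent state."""
    return np.tanh(W_enc @ observation)


def predict_next(state, action):
    """Predict the next latent state from the current state and an action."""
    return np.tanh(W_dyn @ state + W_act @ action)


def plan(state, goal_state, actions, horizon=3, samples=256):
    """'Plan': sample action sequences, roll each one forward with the
    predictive model, and keep the first action of the best rollout."""
    best_action, best_cost = None, np.inf
    for _ in range(samples):
        sequence = [actions[rng.integers(len(actions))] for _ in range(horizon)]
        s = state
        for a in sequence:
            s = predict_next(s, a)
        cost = np.linalg.norm(s - goal_state)  # distance to the goal in latent space
        if cost < best_cost:
            best_action, best_cost = sequence[0], cost
    return best_action                         # act, observe, then replan


observation = rng.normal(size=8)  # stand-in for the robot's current camera frame
goal = rng.normal(size=8)         # stand-in for a frame showing the desired outcome
actions = [np.array([dx, dy]) for dx in (-1.0, 0.0, 1.0) for dy in (-1.0, 0.0, 1.0)]

chosen = plan(encode(observation), encode(goal), actions)
print("first action of the best predicted rollout:", chosen)
```

In the real system, the encoder and predictor are learned from video rather than being random matrices, which is exactly what would let the predicted rollouts respect things like gravity and object permanence instead of being arbitrary.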
