Our New Model Helps AI Think Before It Acts


Today, we’re excited to share V-JEPA 2, our state-of-the-art world model, trained on video, that enables robots and other AI agents to understand the physical world and predict how it will respond to their actions. These capabilities are essential to building AI agents that can think before they act, and V-JEPA 2 represents meaningful progress toward our ultimate goal of developing advanced machine intelligence (AMI). 

As humans, we have the ability to predict how the physical world will evolve in response to our actions or the actions of others. For example, you know that if you toss a tennis ball into the air, gravity will pull it back down. When you walk through an unfamiliar, crowded area, you make your way toward your destination while trying not to bump into the people and obstacles along your path. When playing hockey, you skate to where the puck is going, not where it currently is. We achieve this physical intuition by observing the world around us and developing an internal model of it, which we can use to predict the outcomes of hypothetical actions.

V-JEPA 2 helps AI agents mimic this intelligence, making them smarter about the physical world. The models we use to develop this kind of intelligence in machines are called world models, and they enable three essential capabilities: understanding, predicting and planning.
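To make the "think before you act" idea concrete, here is a minimal, hypothetical sketch of how an agent can use a world model for planning: it predicts the outcome of each candidate action and picks the one whose predicted outcome is closest to the goal. The `ToyWorldModel` below is an invented stand-in with made-up dynamics, not V-JEPA 2's actual architecture or API.

```python
# Illustrative sketch only: a generic "predict, then act" planning loop.
# ToyWorldModel is a hypothetical stand-in; a real world model like
# V-JEPA 2 learns its predictions from video rather than a fixed rule.

import numpy as np

class ToyWorldModel:
    """Toy dynamics: an action simply displaces the state (assumption)."""
    def predict(self, state: np.ndarray, action: np.ndarray) -> np.ndarray:
        return state + action

def plan_one_step(model, state, goal, candidate_actions):
    """Score each candidate action by the predicted distance to the goal,
    then return the best one -- the agent 'thinks before it acts'."""
    def cost(action):
        predicted = model.predict(state, action)
        return float(np.linalg.norm(predicted - goal))
    return min(candidate_actions, key=cost)

state = np.array([0.0, 0.0])
goal = np.array([1.0, 0.0])
actions = [np.array([1.0, 0.0]),
           np.array([-1.0, 0.0]),
           np.array([0.0, 1.0])]

best = plan_one_step(ToyWorldModel(), state, goal, actions)
print(best)  # the action whose predicted outcome lands nearest the goal
```

In a real system the single-step search above would be replaced by predicting several steps ahead over many candidate action sequences, but the core loop is the same: imagine outcomes with the model first, act second.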

Building on V-JEPA, the first video-trained model we released last year, V-JEPA 2 improves understanding and prediction, enabling robots to interact with unfamiliar objects and environments to complete a task.

We trained V-JEPA 2 on video, which helped the model learn important patterns of the physical world: how people interact with objects, how objects move, and how objects interact with one another. When deployed on robots in our labs, we found that robots can use V-JEPA 2 to perform tasks like reaching for an object, picking it up, and placing it in a new location.

Today, in addition to releasing V-JEPA 2, we’re sharing three new benchmarks to help the research community evaluate how well their existing models learn and reason about the world using video. By sharing this work, we aim to give researchers and developers access to the best models and benchmarks to help accelerate research and progress – ultimately leading to better and more capable AI systems that will help enhance people’s lives.




