The first open-domain, closed-loop revisit benchmark for evaluating memory consistency and action control in world models.
MIND is an open benchmark with video datasets and evaluation tools for testing AI world models' memory consistency, visual quality, and action accuracy.
How It Works
You start with MIND, a collection of videos for testing how well AI systems remember scenes and control movement in virtual worlds.
You download the ready-made set of high-quality videos covering different views and actions across eight scenes.
You sort your AI-generated video clips into folders that mirror the reference layout, such as first-person or third-person views.
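The folder-sorting step can be sketched in a few lines of Python. This is a hypothetical helper, not part of MIND's tooling: the view names, the `.mp4` extension, and the filename-tagging convention are all assumptions for illustration.

```python
import shutil
from pathlib import Path

# Hypothetical view names -- MIND's actual layout may differ.
VIEWS = ["first_person", "third_person"]

def sort_clips(generated_dir: str, output_dir: str) -> dict:
    """Copy generated clips into per-view folders that mirror the
    reference layout, keyed on a view tag in each filename."""
    out = Path(output_dir)
    counts = {v: 0 for v in VIEWS}
    for clip in Path(generated_dir).glob("*.mp4"):
        for view in VIEWS:
            if view in clip.name:  # e.g. scene3_first_person_007.mp4
                dest = out / view
                dest.mkdir(parents=True, exist_ok=True)
                shutil.copy(clip, dest / clip.name)
                counts[view] += 1
                break
    return counts
```

The returned counts make it easy to spot clips that failed to match any view folder before running the evaluation.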
With one simple command, you start comparing your videos to the real ones to check memory and action accuracy.
The tool crunches through the videos on your computer, processing several at once if your hardware allows.
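The parallel comparison step can be sketched with a thread pool. This is a minimal stand-in, not MIND's implementation: real video decoding is omitted, frames are represented as flat lists of pixel intensities, and per-clip mean squared error is used as a placeholder metric.

```python
from concurrent.futures import ThreadPoolExecutor

def clip_mse(pair):
    """Mean squared error between a generated and a reference
    frame sequence (frames as flat pixel-intensity lists)."""
    gen_frames, ref_frames = pair
    total, n = 0.0, 0
    for g, r in zip(gen_frames, ref_frames):
        for gp, rp in zip(g, r):
            total += (gp - rp) ** 2
            n += 1
    return total / n

def score_clips(pairs, workers=4):
    """Score many clip pairs concurrently, preserving input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(clip_mse, pairs))
```

Swapping in a process pool (or a GPU-backed metric) is the natural next step once decoding dominates the runtime.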
You get a clear report scoring how well your AI remembers details, how realistic its output looks, and how accurately it follows actions.