CSU-JPG / MIND

Public

The first open-domain closed-loop revisited benchmark for evaluating memory consistency and action control in world models.

43 stars · 100% credibility
Found Feb 11, 2026 at 22 stars.

Language: Python

AI Summary

MIND is an open benchmark with video datasets and evaluation tools for testing AI world models' memory consistency, visual quality, and action accuracy.

How It Works

1
📖 Discover MIND

You hear about MIND, a set of videos and evaluation tools for testing how well AI world models remember scenes and control movements in virtual worlds.

2
⬇️ Grab the video collection

You download the ready-made set of high-quality videos, captured from first- and third-person views across eight scenes, with ground-truth actions for each clip.
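
A minimal sketch of this step, assuming the videos are hosted as a Hugging Face dataset; the repo ID and local path below are placeholders, so check the MIND README for the real dataset name:

```python
from huggingface_hub import snapshot_download

# Pull the whole benchmark locally. "CSU-JPG/MIND" is a guessed dataset ID;
# substitute the ID listed in the repo's README.
local_dir = snapshot_download(
    repo_id="CSU-JPG/MIND",        # hypothetical
    repo_type="dataset",
    local_dir="./mind_benchmark",  # hypothetical
)
print("Reference videos downloaded to", local_dir)
```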

3
📁 Organize your test videos

You sort your AI-generated video clips into folders matching the real ones, like first-person or third-person views.
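
A sketch of one plausible way to mirror the reference layout with scene and view subfolders; the folder and file names are illustrative only, so match whatever structure the downloaded reference set actually uses:

```python
from pathlib import Path

# Illustrative layout only: scene and view names are placeholders.
root = Path("./generated_videos")
for scene in ["scene_01", "scene_02"]:
    for view in ["first_person", "third_person"]:
        (root / scene / view).mkdir(parents=True, exist_ok=True)

# Your model's rollout for scene_01, first-person view, would then go to e.g.:
# generated_videos/scene_01/first_person/rollout_000.mp4
```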

4
🚀 Launch the evaluation

With one simple command, you start comparing your videos to the real ones to check memory and action accuracy.
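
A hedged sketch of the launch step. The --num_gpus and --metrics flags are the ones quoted in the review further down; the entry-point script name and the directory flags are assumptions, so check the repo's README for the exact command:

```python
import subprocess

# "evaluate.py" and the *_dir flags are hypothetical; --num_gpus and
# --metrics lcm,action,dino are the flags mentioned in the review below.
subprocess.run(
    [
        "python", "evaluate.py",                  # hypothetical entry point
        "--generated_dir", "./generated_videos",  # hypothetical flag
        "--reference_dir", "./mind_benchmark",    # hypothetical flag
        "--num_gpus", "8",
        "--metrics", "lcm,action,dino",
    ],
    check=True,
)
```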

5
⚙️ Watch it work

The tool crunches through the videos on your computer, processing several in parallel if you have multiple GPUs.
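
A generic sketch of how per-GPU sharding typically works, not MIND's actual implementation: the clip list is split into one shard per GPU and each shard is scored in its own worker process.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def score_clip(path: str) -> float:
    """Stand-in for a real per-clip metric (reconstruction error, DINO similarity, ...)."""
    return 0.0

def evaluate_shard(gpu_id: int, clips: list[str]) -> dict[str, float]:
    # Pin this worker to one GPU before any CUDA work happens.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    return {clip: score_clip(clip) for clip in clips}

def evaluate_parallel(clips: list[str], num_gpus: int = 8) -> dict[str, float]:
    shards = [clips[i::num_gpus] for i in range(num_gpus)]
    merged: dict[str, float] = {}
    with ProcessPoolExecutor(max_workers=num_gpus) as pool:
        for result in pool.map(evaluate_shard, range(num_gpus), shards):
            merged.update(result)
    return merged
```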

6
📊 Review your scores

You get a clear report with scores for how well your AI remembers details, how realistic its videos look, and how closely its movements match the intended actions.
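
The report lands as JSON; below is a small sketch of inspecting it, where the file name and metric keys are hypothetical and the real schema comes from whatever the evaluation CLI writes:

```python
import json

# "results/metrics.json" is a guessed output path; adjust to the CLI's actual output.
with open("results/metrics.json") as f:
    report = json.load(f)

# Print each metric (e.g. memory consistency, visual quality, action accuracy) on its own line.
for metric, value in report.items():
    print(f"{metric:>24}: {value}")
```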

AI-Generated Review

What is MIND?

MIND delivers the first open-domain closed-loop benchmark for probing memory consistency and action control in world models, using 250 high-quality 1080p@24FPS videos across eight scenes in first- and third-person views. You get a Hugging Face dataset with ground-truth actions, plus a Python CLI that evaluates your generated videos against it and emits JSON metrics such as reconstruction error, pose accuracy via ViPE, DINO feature similarity, and visual quality scores. It addresses the lack of standardized tests for temporal stability and action generalization in dynamic environments.
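
As an illustration of what a DINO-feature metric measures in general, the sketch below compares a generated frame to its reference frame by cosine similarity of global DINO embeddings; the DINOv2 checkpoint and the CLS-token pooling are assumptions, not necessarily the exact model or pooling MIND uses:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

# Generic DINO-feature similarity, not MIND's exact metric.
processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
model = AutoModel.from_pretrained("facebook/dinov2-base").eval()

def dino_embedding(frame: Image.Image) -> torch.Tensor:
    inputs = processor(images=frame, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.last_hidden_state[:, 0]  # CLS token as a global frame descriptor

def frame_similarity(generated: Image.Image, reference: Image.Image) -> float:
    a, b = dino_embedding(generated), dino_embedding(reference)
    return torch.nn.functional.cosine_similarity(a, b).item()
```

Averaged over revisited frames of a trajectory, this kind of score is one rough way a benchmark can quantify memory drift.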

Why is it gaining traction?

Unlike scattered evals, MIND offers a unified pipeline with multi-GPU parallel processing for fast batch runs: you can crank through test sets on 8 GPUs via simple flags like --num_gpus 8 --metrics lcm,action,dino. The shared action spaces across first- and third-person views appeal to devs prototyping world-model agents, filling a gap in world-model benchmarking. Free access to the dataset via Hugging Face lowers the barrier to quick experiments.

Who should use this?

World model researchers validating long-horizon predictions in agents. AI engineers at robotics labs testing video rollouts for memory drift or trajectory errors. Devs on collaborative world-model projects needing plug-and-play metrics beyond basic FID.

Verdict

Worth forking for world model evals: the CLI and dataset make it instantly usable, though its star count at review time (19) and open TODOs such as a leaderboard reflect its fresh status. A solid first repo for closed-loop world-model benchmarks; iterate on it now.
