project-instinct

mjlab-native port of InstinctLab for humanoid RL and Project-Instinct workflows.

Found Mar 10, 2026 at 49 stars.
AI Summary

InstinctMJ provides simulation environments and tooling for training AI to control humanoid robots in tasks such as walking, motion mimicry, perception, and parkour, built on the MuJoCo physics simulator.

How It Works

1. 🔍 Discover humanoid robot training: You find this project while exploring ways to teach robots to walk, jump, and navigate like humans.

2. 📥 Gather your tools: Clone the repository and its dependencies into a shared folder on your computer.

3. 🔧 Set everything up: Run a single install command to prepare your robot training playground.

4. 🚀 Launch a training session: Pick a task such as flat-ground walking or parkour and start the robot learning with one command.

5. 👀 Observe the progress: Watch your robot improve step by step, moving more naturally over time.

6. ▶️ Replay and test: Load a trained policy and play it back to see the learned motions.

7. 🏆 Master robot control: Celebrate as your humanoid robot performs impressive tasks like parkour.
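The steps above can be sketched as a shell quickstart. Treat this as an assumption-laden sketch: the clone URL is elided, and only the `instinct-train` entry point and its task ID are quoted in the review on this page; the install command and the `instinct-play` name are guesses, not verified against the repo.

```shell
# Clone the repository (URL elided; substitute the actual project-instinct repo).
git clone <project-instinct-repo-url>
cd project-instinct

# Editable install to set up the training environment (install command assumed).
pip install -e .

# Train on the flat-ground locomotion task; this command and task ID
# are quoted in the review on this page.
instinct-train Instinct-Locomotion-Flat-G1-v0

# Replay a trained policy; instinct-play is a hypothetical name
# mirroring instinct-train, not confirmed by the repo.
instinct-play Instinct-Locomotion-Flat-G1-v0
```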

Star Growth

The repo has grown from 49 to 63 stars.
AI-Generated Review

What is InstinctMJ?

InstinctMJ is the mjlab-native port of InstinctLab: a Python package delivering humanoid RL environments for Project-Instinct workflows in MuJoCo Warp simulations. It provides task suites for locomotion, motion shadowing, perceptive navigation, and parkour on Unitree G1 robots, with motion-reference support and integration with instinct_rl for train/play/export pipelines. Users get CLI commands like `instinct-train Instinct-Locomotion-Flat-G1-v0` and pretrained weights for instant policy testing.
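The quoted task ID `Instinct-Locomotion-Flat-G1-v0` suggests a hyphen-delimited naming scheme. The sketch below parses such IDs; the five-field layout (family, task, variant, robot, version) is an assumption inferred from this single example, not documented by the repo.

```python
from typing import NamedTuple


class TaskID(NamedTuple):
    """Assumed five-field layout of an Instinct task ID."""
    family: str   # e.g. "Instinct"
    task: str     # e.g. "Locomotion"
    variant: str  # e.g. "Flat"
    robot: str    # e.g. "G1"
    version: str  # e.g. "v0"


def parse_task_id(task_id: str) -> TaskID:
    """Split an ID like 'Instinct-Locomotion-Flat-G1-v0' on hyphens."""
    parts = task_id.split("-")
    if len(parts) != 5:
        raise ValueError(f"unexpected task ID layout: {task_id!r}")
    return TaskID(*parts)


tid = parse_task_id("Instinct-Locomotion-Flat-G1-v0")
print(tid.robot)  # -> G1
```

A parser like this is handy when sweeping over many task variants in a launcher script, assuming the convention holds across the suite.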

Why is it gaining traction?

It lets IsaacLab users move to faster mjlab-native simulations without rewriting their workflows, and offers structured logging and ONNX export for deployment. The unified ecosystem (standalone tasks, motion data loaders, and debug visualization) speeds prototyping compared with fragmented RL setups.

Who should use this?

RL researchers training humanoid whole-body controllers, such as locomotion or parkour policies on G1 robots. It also suits teams migrating InstinctLab pipelines to MuJoCo for better performance in perception-heavy tasks.

Verdict

Promising early entry (48 stars, 1.0% credibility) with strong README quickstarts and pretrained models, but its non-commercial license limits enterprise use. Grab it for mjlab-native humanoid RL experiments today.

