MINT-SJTU / Evo-RL

We release Evo-RL, an open-source framework for real-world offline RL on the SO101 and AgileX PiPER, built for easier reproduction.

Found Mar 05, 2026 at 79 stars
Python
AI Summary

Evo-RL is an open-source framework for real-world reinforcement learning on SO101 robots, providing pipelines for data collection, value function training, advantage computation, and iterative policy improvement.
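The pipeline's core idea, scoring how good each recorded moment is and how much better each action was than expected, can be sketched with discounted returns and a simple baseline. This is an illustrative toy, not Evo-RL's actual API; the function names and reward sequence are assumptions:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Monte-Carlo value targets: G_t = r_t + gamma * G_{t+1}."""
    returns = np.zeros(len(rewards), dtype=float)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def advantages(returns, values):
    """Per-step advantage: how much better the outcome was than predicted."""
    return returns - values

# Toy episode: sparse success reward at the end.
rewards = np.array([0.0, 0.0, 0.0, 1.0])
returns = discounted_returns(rewards, gamma=0.9)
baseline = np.full(4, 0.5)  # stand-in for a learned value estimate
print(returns)              # [0.729 0.81  0.9   1.   ]
print(advantages(returns, baseline))
```

Steps near the success get high returns and positive advantages, which is what later stages of the pipeline use to weight the data.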

How It Works

1. 🔍 Discover Evo-RL

You find Evo-RL, an easy way to teach real robots new skills through trial and improvement.

2. ⚙️ Connect your robot

Hook up your SO101 robot arm and leader controller so it follows your hand movements.

3. 📹 Record demonstrations

Guide the robot through tasks while it records your smooth motions to learn from.

4. 📊 Train value estimates

The system learns to score how good each moment in your recordings is.

5. 🔄 Improve your data

Value scores get added to recordings, highlighting the best parts for training.

6. 🧠 Train smarter policies

Your robot learns policies that focus on high-value actions from the improved data.

🚀 Robot skills improve

Deploy the policy, collect better data, and repeat to make your robot a pro.
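The loop above hinges on one trick: the relabeled advantage scores decide how strongly each demonstrated action is imitated. A minimal sketch of that weighting step (AWR-style exponential weights; all names and the toy data are illustrative assumptions, not Evo-RL's code):

```python
import numpy as np

def advantage_weights(advs, beta=1.0, w_max=20.0):
    """Exponential advantage weighting: high-advantage steps
    count more in the imitation loss; clipped for stability."""
    w = np.exp(np.asarray(advs, dtype=float) / beta)
    return np.minimum(w, w_max)

def weighted_bc_loss(pred_actions, demo_actions, weights):
    """Behavioral-cloning MSE, reweighted per step by advantage."""
    err = np.mean((pred_actions - demo_actions) ** 2, axis=-1)
    return float(np.sum(weights * err) / np.sum(weights))

# Toy batch: two recorded steps, one with clearly higher advantage.
demo = np.array([[0.1, 0.2], [0.5, 0.5]])
pred = np.zeros_like(demo)
w = advantage_weights([2.0, -2.0], beta=1.0)
print(weighted_bc_loss(pred, demo, w))
```

The low-advantage step contributes almost nothing, so the policy concentrates on reproducing the "best parts" highlighted in step 5.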

AI-Generated Review

What is Evo-RL?

Evo-RL delivers a full offline RL pipeline for real-world robotics on SO101 arms and AgileX PiPER bases, built in Python atop LeRobot. You collect teleop or human-in-the-loop data, train value functions (e.g., Pi*0.6), infer advantages, and fine-tune policies via simple CLI commands such as `lerobot-human-inloop-record` and `lerobot-train`. It narrows the sim-to-real gap with reproducible workflows that push iterative improvements onto physical hardware.
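In shell terms, the workflow might look roughly like this. The two command names come from the repo's own description, but none of their options are shown here because they are not documented in this page; treat the structure as a sketch and consult each command's `--help` for real flags:

```shell
# Workflow sketch -- command names from the repo description;
# any flags you pass are your own, verified via --help.

# 1. Collect teleop / human-in-the-loop demonstrations
lerobot-human-inloop-record   # see --help for robot, port, and dataset options

# 2. Train the value function, then the advantage-conditioned policy
lerobot-train                 # see --help for policy and dataset configuration

# 3. Deploy, collect better rollouts, and repeat steps 1-2
```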

Why is it gaining traction?

Unlike fragmented RL repos, Evo-RL offers end-to-end CLI reproducibility, from data collection to deployment, with Hugging Face dataset/model integration coming soon. Developers like the human-in-the-loop rollouts and advantage tags, which boost policy success rates without custom scripting. The GitHub releases report steady baselines, making it a quick starting point over more piecemeal alternatives.

Who should use this?

Robotics engineers tuning RL policies on SO101 or AgileX setups, especially for manipulation tasks like insertion or transfer. Ideal for researchers benchmarking real-world RL baselines, or teams iterating on policies with evolutionary improvement loops.

Verdict

Grab it if you have the hardware: the CLI simplicity shines for prototyping. But with only 79 stars it is early; wait for the models and datasets promised in the GitHub release notes. A solid niche tool, not yet production-ready.

