hong-labs / HoRD

Public

Robust humanoid motion control via history-conditioned reinforcement learning and online distillation.

19 stars · 1 fork · Python

Found Mar 12, 2026 at 11 stars.
AI Summary

HoRD is a research codebase for training humanoid robots to imitate human motions from motion-capture datasets using reinforcement learning in physics simulators.

How It Works

1
🔍 Discover HoRD

You find this exciting project that teaches humanoid robots to copy real human movements from motion-capture data.

2
🛠️ Set up your workspace

You prepare a simple space on your computer to start training your robot's brain.

3
📥 Download motion examples

You grab ready-to-use collections of human movements to teach your robot.

4
🚀 Train basic tracking

You launch the first training round and watch your robot begin matching simple body poses.

5
🧠 Build advanced imitation

You run the next training phase to make movements smoother and more lifelike.

6
🎮 Test and play motions

You try out different human actions and see your robot perform them in simulation.

Lifelike robot motions

Your humanoid now fluidly imitates complex human movements like walking, dancing, or reaching.
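The "online distillation" named in the project description can be sketched as a teacher-student loop: a teacher policy trained with RL and privileged state access supervises a student that sees only deployable observations. The toy numpy sketch below illustrates that idea only; the linear policies, dimensions, and names are invented here, not taken from HoRD's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "teacher": a fixed linear policy standing in for an RL-trained
# controller that sees a 6-dim privileged state (e.g. true terrain info).
W_teacher = rng.normal(size=(6, 3))

def teacher_action(priv_obs: np.ndarray) -> np.ndarray:
    return priv_obs @ W_teacher

# The student only observes the first 4 state dims (what a real robot
# could sense) and is fit online by regressing onto teacher actions.
W_student = np.zeros((4, 3))
lr = 0.05

for step in range(500):
    priv = rng.normal(size=(64, 6))    # batch of simulated states
    target = teacher_action(priv)      # teacher labels the states it visits
    pred = priv[:, :4] @ W_student     # student acts from partial obs
    # SGD step on 0.5 * ||pred - target||^2
    W_student -= (lr / 64) * priv[:, :4].T @ (pred - target)

# The student recovers the teacher's weights on the observable dims;
# the privileged dims it cannot see remain irreducible noise.
err = float(np.abs(W_student - W_teacher[:4]).max())
print(f"max weight gap on shared dims: {err:.3f}")
```

The same loop shape appears in sim-to-real pipelines generally: the student never needs privileged simulator state at deployment, because the teacher's knowledge has been distilled into it during training.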

AI-Generated Review

What is HoRD?

HoRD trains robust humanoid controllers that imitate complex motions from datasets like AMASS, using history-conditioned reinforcement learning and online distillation for reliable performance on hardware such as the Unitree G1 or H1. Developers download processed data and pretrained checkpoints from Hugging Face, then run two-stage Python training scripts in the IsaacLab or Genesis simulators to produce policies that handle compliant or uneven terrain. It emphasizes robust locomotion via sequential stepping and motion retargeting, and the shipped data pipeline skips weeks of preparation.
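"History-conditioned" means the policy's input is a rolling window of past observations rather than a single frame, which lets it infer unobserved dynamics such as terrain compliance. A minimal sketch, with the class name, window length, and feature size invented for illustration:

```python
from collections import deque

import numpy as np

OBS_DIM = 4      # per-step proprioceptive features (toy size)
HISTORY_LEN = 3  # number of past steps the policy conditions on


class HistoryBuffer:
    """Fixed-length rolling window of past observations.

    The concatenated window is what a history-conditioned policy
    consumes in place of a single-step observation.
    """

    def __init__(self, obs_dim: int, history_len: int):
        # Start zero-filled so the input size is constant from step one.
        self.history = deque(
            [np.zeros(obs_dim) for _ in range(history_len)],
            maxlen=history_len,
        )

    def push(self, obs: np.ndarray) -> None:
        self.history.append(obs)  # oldest entry is evicted automatically

    def policy_input(self) -> np.ndarray:
        # Oldest-to-newest concatenation: shape (history_len * obs_dim,)
        return np.concatenate(list(self.history))


buf = HistoryBuffer(OBS_DIM, HISTORY_LEN)
for t in range(5):
    buf.push(np.full(OBS_DIM, float(t)))

x = buf.policy_input()
print(x.shape)  # (12,) -- steps t=2, 3, 4 stacked oldest first
```

The fixed-size concatenation is what makes this drop into a standard MLP policy; a recurrent network would be the alternative design for consuming the same history.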

Why is it gaining traction?

Unlike scattered RL repos, HoRD bundles PPO, AMP, and masked-mimic agents with domain randomization for robust behavior, plus SLURM scripts for cluster training. Prebuilt configs for full-body tracking and evaluation metrics such as Cartesian tracking error make iteration fast, and the Hugging Face integration means you can train or evaluate in minutes without building custom pipelines. It is a practical humanoid platform that bridges the sim-to-real gap many repos ignore.
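A Cartesian tracking-error metric like the one mentioned is typically the mean Euclidean distance between simulated and reference body positions. A hedged numpy sketch, with shapes and the function name assumed here rather than taken from HoRD's evaluation code:

```python
import numpy as np


def mean_cartesian_error(sim_pos: np.ndarray, ref_pos: np.ndarray) -> float:
    """Mean Euclidean distance between simulated and reference body
    positions, averaged over time steps and bodies.

    sim_pos, ref_pos: arrays of shape (T, num_bodies, 3), in meters.
    """
    assert sim_pos.shape == ref_pos.shape
    # Per-body distance at each step: shape (T, num_bodies)
    per_body = np.linalg.norm(sim_pos - ref_pos, axis=-1)
    return float(per_body.mean())


# Toy check: a constant 3 cm offset on every body yields a ~3 cm error.
T, B = 10, 5
ref = np.zeros((T, B, 3))
sim = ref + np.array([0.03, 0.0, 0.0])
print(mean_cartesian_error(sim, ref))  # ~0.03
```

Averaging over bodies as well as time is one common convention; per-body breakdowns (e.g. end-effectors only) are the other usual way this metric gets reported.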

Who should use this?

Robotics PhD students replicating robust humanoid contact planning papers, RL engineers at labs tuning Unitree G1 for locomotion on rough terrain, or humanoid teams needing quick motion imitation baselines beyond basic walking. Ideal for researchers extending deep RL to expressive retargeting without rebuilding envs from scratch.

Verdict

Try it if you work in humanoid RL: solid docs, Hugging Face assets, and multi-simulator support make it usable, even though 19 stars and a 1.0% credibility score signal early maturity. Polish the tests and add support for more robots to hit prime time.
