TeleHuman

Official implementation of "HUSKY: Humanoid Skateboarding System via Physics-Aware Whole-Body Control"

122 stars · 100% credibility · Found Mar 10, 2026 at 83 stars
Language: Python

AI Summary

This repository provides a simulation framework for training humanoid robots to skateboard using reinforcement learning and physics-based control.

How It Works

1. 🤩 Discover Robot Skateboarding: You stumble upon an exciting video of a humanoid robot nimbly skateboarding, sparking your curiosity to try it yourself.

2. 📥 Get the Project: Download the ready-to-use simulation world where robots learn tricks on a skateboard.

3. ⚙️ Set Up Your Playground: Prepare your computer with a few simple steps to create the perfect training ground.

4. 🚀 Launch Training: Hit start and watch as the robot begins practicing pushes, turns, and balances automatically.

5. 🎉 See the Magic Happen: In moments, your robot starts landing smooth ollies and carving turns like a pro skater.

6. ▶️ Play and Test: Take control with the keyboard arrows to guide the robot's speed and direction in real time.

7. 🏆 Master Skateboarding: Celebrate as your humanoid robot flawlessly shreds the board, ready for videos or demos.

AI-Generated Review

What is humanoid_skateboarding?

This Python project is the official implementation of HUSKY, a reinforcement learning system that trains humanoid robots such as the Unitree G1 to skateboard via physics-aware whole-body control. It tackles dynamic balance and locomotion by combining motion priors with RL policies, letting users train in highly parallel simulations and evaluate in MuJoCo. Run `uv run train` for massive environment batches or `uv run play` with a checkpoint for instant playback.
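Based on the CLI described above, a typical session might look like the following sketch. Only `uv run train` and `uv run play` are mentioned by the review; the checkpoint flag and path are illustrative assumptions, not documented options:

```shell
# Train the skateboarding policy in the high-parallel simulator
uv run train

# Replay a trained policy in the MuJoCo viewer
# (--checkpoint and the path are placeholders; check the repo's docs
#  for the actual flag name and where checkpoints are written)
uv run play --checkpoint logs/husky/checkpoint.pt
```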

Why is it gaining traction?

It stands out with AMP integration for expert-like motions on unstable boards, plus a lightweight MuJoCo viewer for keyboard-controlled tests—perfect for quick humanoid control experiments. Developers dig the official implementation's seamless uv setup, pretrained checkpoints, and paper-backed results showing robust skating. No fluff: scales to 4096 envs out of the box.
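AMP (Adversarial Motion Priors) shapes the policy's reward with a discriminator trained to distinguish expert reference transitions from the policy's own. A minimal sketch of the standard least-squares AMP style reward, written here from the published formulation rather than this repo's actual code:

```python
import numpy as np

def amp_style_reward(disc_logits: np.ndarray) -> np.ndarray:
    """Least-squares AMP style reward: r = max(0, 1 - 0.25 * (D - 1)^2).

    `disc_logits` are the discriminator's raw scores for the policy's
    state transitions; scores near 1 mean "looks like the motion prior",
    so the reward peaks at 1 there and falls off quadratically.
    """
    return np.maximum(0.0, 1.0 - 0.25 * np.square(disc_logits - 1.0))

# Expert-like transitions (D ≈ 1) earn full reward; off-manifold
# transitions are clamped to zero instead of going negative.
print(amp_style_reward(np.array([1.0, 0.0, 3.0])))  # 1.0, 0.75, 0.0
```

In a full training loop this style reward is typically blended with the task reward (here, tracking speed and heading commands on the board), which is how AMP yields expert-like motion without hand-tuned gait costs.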

Who should use this?

Humanoid robotics engineers tuning legged locomotion for sim-to-real transfer, RL researchers pushing whole-body control on tricky tasks like skateboarding, or legged robot teams needing motion priors for balance-heavy demos.

Verdict

Grab it if you're into humanoid RL: solid docs, an easy CLI, and an arXiv paper make it extensible despite its early stage. Promising for control baselines; test the ONNX exports first.

