jtydhr88 / ComfyUI-Kimodo

A ComfyUI plugin that wraps [Kimodo](https://github.com/nv-tlabs/kimodo) — NVIDIA's kinematic motion diffusion model for generating high-quality 3D human and humanoid robot motions from text prompts with optional kinematic constraints.

Found Apr 13, 2026 at 45 stars.

How It Works

1. 🔍 Discover the motion maker

You find ComfyUI-Kimodo, a fun plugin that turns words into lifelike dances and robot moves right in your ComfyUI workspace.

2. 📥 Add the plugin

Copy it into your ComfyUI custom_nodes folder and install its dependencies with a single command.

3. 🔄 Restart ComfyUI

Restart ComfyUI, and the new Kimodo nodes appear, ready to use.

4. 🤖 Pick your character model

Choose a human or robot body – the matching model weights download automatically, so your character can make realistic moves.
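The "download automatically" part of step 4 boils down to a check-then-fetch cache: look for the weights locally, and fetch them only on first use. A minimal sketch of that pattern; every name here (function, file, checkpoint) is illustrative rather than the plugin's actual code, which would resolve weights through the Hugging Face hub:

```python
import tempfile
from pathlib import Path

def get_checkpoint(name, cache_dir, download_fn):
    """Return a local checkpoint path, downloading it on first use.

    All names here are illustrative, not the plugin's real API; a real
    implementation would call into huggingface_hub instead of download_fn.
    """
    target = Path(cache_dir) / name
    if not target.exists():                      # first use only: fetch
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(download_fn(name))
    return target

# Stand-in downloader so the sketch runs offline.
calls = []
def fake_download(name):
    calls.append(name)
    return b"weights"

cache = tempfile.mkdtemp()
p1 = get_checkpoint("kimodo-human.ckpt", cache, fake_download)
p2 = get_checkpoint("kimodo-human.ckpt", cache, fake_download)
print(len(calls))  # the downloader is hit only once; second call is a cache hit
```

The second call returns the same path without touching the network, which is why only the very first generation run has a download delay.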

5. 💭 Describe the action

Type simple words like 'a person waves hello then sits down' to guide the motion.

6. Generate the motion

Hit generate, and watch your words come alive as smooth, natural movements appear.

7. 👀 Preview and save

See a stick-figure animation, tweak if needed, and export to video or animation files.
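Step 7's export to animation files can be as simple as saving joint-position arrays. A minimal NumPy sketch of an NPZ round trip; the array names and layout are made up for illustration, and the plugin's real NPZ schema may differ:

```python
import numpy as np

# A dummy motion clip: 120 frames, 24 joints, xyz positions.
motion = np.zeros((120, 24, 3), dtype=np.float32)
motion[:, 0, 1] = np.linspace(0.0, 1.0, 120)  # root joint rises over the clip

# Hypothetical layout: one positions array plus the frame rate.
np.savez("motion.npz", positions=motion, fps=np.float32(30.0))

# Reloading in another tool (e.g. a Blender import script):
data = np.load("motion.npz")
print(data["positions"].shape, float(data["fps"]))  # (120, 24, 3) 30.0
```

BVH and FBX add skeleton hierarchy and rotations on top of this, which is why they import directly into Blender or Maya while NPZ is the lightweight raw-data option.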

🎉 Your motion is ready!

Import into your favorite software and bring characters to life with perfect, text-guided animations.

AI-Generated Review

What is ComfyUI-Kimodo?

ComfyUI-Kimodo is a Python plugin that brings NVIDIA's Kimodo text-to-motion model into ComfyUI workflows, letting you generate realistic 3D human or robot animations from natural-language prompts with optional kinematic constraints such as pose keyframes or trajectories. Clone it into your ComfyUI custom_nodes folder, install the dependencies, and you get nodes for model loading, prompt encoding, sampling, foot-skate cleanup, 2D/3D previews, and export to NPZ, BVH, or Mixamo-rigged FBX. Models download automatically from Hugging Face on first use, so the plugin fits seamlessly into ComfyUI's node-based pipelines.
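The nodes listed above follow ComfyUI's standard custom-node conventions. A structural sketch of how such a node is declared; the class name, input fields, and stubbed sampler are hypothetical, and only the `INPUT_TYPES`/`RETURN_TYPES`/`NODE_CLASS_MAPPINGS` shape is ComfyUI's actual convention:

```python
class KimodoTextToMotion:
    """Hypothetical sampler node, shaped like a ComfyUI custom node."""

    @classmethod
    def INPUT_TYPES(cls):
        # ComfyUI reads this to draw the node's input sockets and widgets.
        return {
            "required": {
                "prompt": ("STRING", {"multiline": True}),
                "num_frames": ("INT", {"default": 120, "min": 1}),
                "seed": ("INT", {"default": 0}),
            }
        }

    RETURN_TYPES = ("MOTION",)   # one output socket of a custom type
    FUNCTION = "generate"        # method ComfyUI calls when the graph runs
    CATEGORY = "kimodo"

    def generate(self, prompt, num_frames, seed):
        # A real node would run the diffusion sampler here; this stub
        # just packages the request so the sketch is runnable.
        motion = {"prompt": prompt, "frames": num_frames, "seed": seed}
        return (motion,)

# ComfyUI discovers a plugin's nodes through this module-level mapping.
NODE_CLASS_MAPPINGS = {"KimodoTextToMotion": KimodoTextToMotion}
```

Each of the plugin's nodes (loader, encoder, sampler, cleanup, preview, export) would be one such class, wired together by ComfyUI's graph executor.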

Why is it gaining traction?

It stands out by wrapping a cutting-edge diffusion model in ComfyUI's ecosystem, enabling motion generation without ever leaving your node graph. Installation is straightforward via ComfyUI-Manager or the portable release, and it runs on macOS and AMD GPUs. Multi-prompt chaining yields smooth transitions between actions, and batch sampling produces several candidates at once. Developers grab it for quick text-driven animation prototyping, bypassing standalone CLIs.
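The multi-prompt chaining mentioned above can be pictured as concatenating consecutive clips while blending across an overlap window. A minimal linear-crossfade sketch in NumPy; the plugin's actual transition method is not documented here, so treat this as the general idea only:

```python
import numpy as np

def chain_clips(clip_a, clip_b, overlap):
    """Concatenate two motion clips of shape (frames, joints, 3),
    linearly crossfading the last `overlap` frames of A into the
    first `overlap` frames of B."""
    w = np.linspace(0.0, 1.0, overlap)[:, None, None]   # per-frame blend weight
    blended = (1.0 - w) * clip_a[-overlap:] + w * clip_b[:overlap]
    return np.concatenate([clip_a[:-overlap], blended, clip_b[overlap:]])

a = np.zeros((60, 24, 3), dtype=np.float32)   # e.g. "a person waves hello"
b = np.ones((60, 24, 3), dtype=np.float32)    # e.g. "then sits down"
chained = chain_clips(a, b, overlap=10)
print(chained.shape)  # (110, 24, 3): 50 + 10 blended + 50 frames
```

A diffusion-based chainer would do something smarter (e.g. conditioning the next clip on the previous clip's end pose), but the overlap-and-blend structure is the same.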

Who should use this?

Animation pipeline builders extending ComfyUI for game development or VFX, robotics simulation engineers needing Unitree G1 motions, and 3D artists chaining text prompts into Blender or Maya via BVH/FBX export. It suits anyone building on ComfyUI's node system who is tired of manual retargeting.

Verdict

A solid early pick for ComfyUI users: grab it from the repo's releases or via ComfyUI-Manager. At 45 stars and a 1.0% credibility score, though, expect rough edges, such as the optional C++ build for post-processing. Check the README for gated Hugging Face model access; it is mature enough for prototypes, not yet for production.
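For a sense of what that post-processing does: foot-skate cleanup pins a foot in place while it is in ground contact, so generated characters stop sliding. A toy NumPy illustration of the idea, not the plugin's implementation:

```python
import numpy as np

def remove_foot_skate(foot_pos, contact_height=0.05):
    """Pin a foot trajectory of shape (frames, 3) in place whenever its
    height is below `contact_height`, removing sliding during contact.
    The threshold value is an arbitrary example."""
    out = foot_pos.copy()
    for t in range(1, len(out)):
        if out[t, 1] < contact_height:      # y-up convention: foot on ground
            out[t, 0] = out[t - 1, 0]       # hold x from the previous frame
            out[t, 2] = out[t - 1, 2]       # hold z from the previous frame
    return out

# A foot that drifts forward even while planted on the floor (height 0):
foot = np.zeros((5, 3))
foot[:, 0] = [0.0, 0.1, 0.2, 0.3, 0.4]   # sliding along x
cleaned = remove_foot_skate(foot)
print(cleaned[:, 0])  # x frozen at 0.0: the skate is gone
```

Production cleanup also re-solves the leg with IK so the pinned foot stays attached to the body, which is the kind of work the optional C++ build likely accelerates.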


