okdalto/ComfyUI-VideoMaMa

ComfyUI custom node implementation of VideoMaMa for video matting with mask conditioning.

35 stars
Found Feb 08, 2026 at 18 stars
AI Summary (Python)

Custom nodes for ComfyUI that provide video matting capabilities using mask conditioning to extract objects from videos.

How It Works

1
🔍 Discover VideoMaMa

You discover VideoMaMa, a generative matting tool for cleanly pulling objects out of videos inside ComfyUI.

2
📥 Add the Tools

Clone the repo into ComfyUI's custom_nodes folder and restart ComfyUI so the new nodes appear.

3
🧠 Models Auto-Download

The model weights needed for matting download automatically the first time you run the nodes.

4
🎬 Load Your Video

Load your video clip along with a rough mask outlining the object you want to separate.

5
🖱️ Point and Mark

Click directly on a video frame to mark positive points on the object to track and negative points on regions to ignore.

6
Create Smooth Masks

Run the workflow and watch the model generate temporally consistent alpha mattes that follow your object through every frame.

7
Clean Video Ready

Your object is cleanly isolated with an alpha channel, ready for editing or layering anywhere.
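The final layering step uses the standard alpha-over formula, out = a * fg + (1 - a) * bg, applied per pixel with the generated matte as the alpha channel. A minimal pure-Python sketch (the `composite` helper and toy pixel values are illustrative, not part of the nodes):

```python
def composite(fg, alpha, bg):
    """Alpha-over blend: out = a * fg + (1 - a) * bg per pixel.

    fg and bg are flat lists of 0-255 channel values; alpha holds the
    matching matte values in [0.0, 1.0] produced by the matting model.
    """
    return [round(a * f + (1 - a) * b) for f, a, b in zip(fg, alpha, bg)]

# An opaque matte (1.0) keeps the foreground pixel; a zero matte keeps the background.
print(composite([200, 60], [1.0, 0.0], [10, 10]))  # [200, 10]
```

Because the matte is soft rather than binary, edges like hair or motion blur blend smoothly instead of cutting out hard.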

AI-Generated Review

What is ComfyUI-VideoMaMa?

ComfyUI-VideoMaMa ports VideoMaMa's generative video matting into ComfyUI custom nodes, turning rough input masks and video frames into precise alpha mattes for object extraction. Built in Python with PyTorch and Diffusers, it uses a Stable Video Diffusion base with a fine-tuned UNet and auto-downloads the models on first run. Users get loader, inference, and optional SAM2 mask-generator nodes that plug into custom ComfyUI workflows for seamless video processing.
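The first-run auto-download described above follows a common check-then-fetch caching pattern. A minimal sketch, assuming a hypothetical `ensure_weights` helper and path layout (the node's actual downloader and storage locations may differ):

```python
import os

def ensure_weights(path, fetch):
    """Download model weights only if they are not already cached locally.

    `fetch` stands in for the real downloader (e.g. a Hugging Face pull of
    the SVD base and fine-tuned UNet); both the helper and the path layout
    here are illustrative, not the node's actual code.
    """
    if not os.path.exists(path):
        os.makedirs(os.path.dirname(path), exist_ok=True)
        fetch(path)  # first run only: grab the weights
    return path
```

Subsequent runs find the cached file and skip the network entirely, which is why only the first workflow execution is slow.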

Why is it gaining traction?

It skips manual model hunts by auto-downloading weights to the expected model path, and a clean git clone install avoids the usual custom-node import failures. The killer hook: an interactive point-selector UI in the browser for SAM2 masks. Click positive and negative points directly on frames, with no external tools needed. It handles resolution scaling and motion buckets sensibly and slots into existing ComfyUI workflows without node conflicts.
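Under the hood, that point selector boils down to SAM-style point prompts: pixel coordinates paired with labels, where 1 marks the object and 0 marks regions to exclude. A minimal sketch of how such clicks could be encoded (the `prompt_from_clicks` helper is hypothetical; the node's internal format may differ):

```python
def prompt_from_clicks(clicks):
    """Turn browser clicks into SAM-style point prompts.

    Each click is ((x, y), is_positive); labels follow the SAM convention
    of 1 for foreground points and 0 for background points.
    """
    coords = [xy for xy, _ in clicks]
    labels = [1 if positive else 0 for _, positive in clicks]
    return {"point_coords": coords, "point_labels": labels}

# One click on the subject, one on a distractor to exclude.
prompt = prompt_from_clicks([((120, 80), True), ((300, 200), False)])
print(prompt["point_labels"])  # [1, 0]
```

The mask SAM2 produces from these prompts then becomes the rough conditioning input that the matting model refines into a soft alpha matte.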

Who should use this?

ComfyUI power users building video compositing pipelines, such as chaining VHS loaders into mattes for AI VFX. Indie game devs prototyping character extractions from footage. Motion graphics artists tired of rotoscoping who want mask-conditioned matting in their workflows.

Verdict

Worth adding via your ComfyUI node manager for video matting experiments: the thorough README with example workflows beats most peers. The low star count signals an early-stage project, so test on small clips first given the VRAM demands.
