Jasonzzt

Cache-DiT Node for ComfyUI

236 stars · 10 forks · Python
AI Summary

A ComfyUI add-on that accelerates supported diffusion-transformer (DiT) image and video models by reusing intermediate computations across denoising steps, for 1.4-2x faster generation.

How It Works

1
🔍 Discover Slower Creations

You're using ComfyUI to make beautiful AI images and videos, but notice they take too long to generate.

2
Find the Speed Booster

You hear about ComfyUI-CacheDiT, a simple add-on that makes your favorite AI models run 1.4-2x faster with no extra setup.

3
📥 Add It Easily

Download the files and drop them into your ComfyUI custom_nodes folder, then restart ComfyUI to see the new lightning-bolt nodes.
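From a terminal, the install in step 3 typically looks like the commands below. This is a sketch of the standard ComfyUI custom-node install; the repo URL is an assumption based on the page header, so check the project's README for the real one.

```shell
# Assumed repo URL -- verify against the project's README before cloning.
cd ComfyUI/custom_nodes
git clone https://github.com/Jasonzzt/ComfyUI-CacheDiT.git
# Restart ComfyUI afterwards so the new nodes get registered.
```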

4
Connect the Magic Node

Drag your AI model into the accelerator node, toggle it on, and link it into your generation workflow – it auto-detects your model type and configures itself.

5
🎨 Build Your Creation

Set up your prompt, steps, and style just like always, now with the accelerator boosting your model underneath.

6
🚀 Hit Generate

Press generate and watch your images or videos appear much quicker, with a performance summary showing the speedup right in the logs.

🎉 Faster Art Magic

Celebrate your speedy results – same great quality, but now you create more in less time, ready for the next masterpiece!


Star Growth

The repo grew from 112 to 236 stars.
AI-Generated Review

What is ComfyUI-CacheDiT?

ComfyUI-CacheDiT is a Python custom node that plugs cache-dit acceleration into ComfyUI workflows, delivering 1.4-1.6x speedups for DiT models like Z-Image, Qwen-Image, LTX-2 video, and WAN2.2 without any manual tuning. Drop it between your model loader and KSampler—enable it, pick auto-detect or a preset, and it caches residuals across denoising steps to skip redundant computations. It's a zero-config fix for sluggish DiT inference in image and video generation pipelines.
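The core idea described above – caching residuals across denoising steps to skip redundant computation – can be sketched in a toy loop. This is an illustrative model of the technique, not cache-dit's actual API: the block, the sampler step, and the refresh interval are all made-up stand-ins.

```python
import math

def expensive_block(x):
    # Stand-in for a heavy DiT transformer block.
    return math.tanh(x) * 0.5

def denoise(x, steps, cache_interval=2):
    """Toy residual-caching loop (illustrative only, not cache-dit's API).

    On "refresh" steps the heavy block actually runs and its residual
    (output - input) is stored; on the steps in between, the cached
    residual is reused and the block is skipped entirely.
    """
    cached_residual = None
    block_calls = 0
    for step in range(steps):
        if cached_residual is None or step % cache_interval == 0:
            out = expensive_block(x)
            block_calls += 1
            cached_residual = out - x   # cache the block's residual
        else:
            out = x + cached_residual   # reuse: skip the heavy block
        x = out * 0.99                  # stand-in for the rest of the sampler step
    return x, block_calls

_, calls = denoise(1.0, steps=20, cache_interval=2)
print(calls)  # the heavy block ran on only 10 of the 20 steps
```

With `cache_interval=2`, the block runs on half the steps; the real library picks when to refresh adaptively, trading a small quality risk for the skipped compute.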

Why is it gaining traction?

It stands out with one-click setup, model-specific nodes for video quirks like LTX-2 temporal consistency, and a performance dashboard that logs cache hits and speedups after each run. Unlike generic optimizers, it automatically falls back to lightweight caching for ComfyUI's non-standard architectures, and it can be disabled with a toggle, no restart needed. Developers love the verified benchmarks (up to 2x on video) and the YouTube tutorial for instant wins.

Who should use this?

ComfyUI power users grinding high-step DiT generations—think AI artists batching Z-Image or Qwen-Image outputs, video creators optimizing LTX-2 T2V/I2V pipelines, or researchers testing WAN2.2 MoE models. Skip if you're on low-step distilled runs (<10 steps) where warmup overhead eats gains.
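The low-step caveat follows from simple arithmetic: the cache needs a few full-cost warmup steps before it pays off. Here is a back-of-envelope sketch with assumed numbers (5 warmup steps, cached steps at half cost); cache-dit's real overheads and defaults will differ.

```python
def effective_speedup(total_steps, warmup_steps=5, cached_step_cost=0.5):
    """Rough break-even model with assumed numbers, not measurements:
    warmup steps run at full cost, and each later step costs
    `cached_step_cost` of a full step once caching kicks in."""
    if total_steps <= warmup_steps:
        return 1.0  # caching never engages
    cost = warmup_steps + (total_steps - warmup_steps) * cached_step_cost
    return total_steps / cost

print(round(effective_speedup(50), 2))  # long run: caching pays off
print(round(effective_speedup(8), 2))   # short distilled run: much smaller gain
```

Under these assumptions a 50-step run nets roughly 1.8x while an 8-step distilled run nets only about 1.2x, which is why warmup overhead eats the gains on low-step workflows.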

Verdict

Grab it for DiT-heavy ComfyUI setups—220 stars and solid README/video make it production-ready despite the 1.0% credibility score signaling early maturity. Test on your models first; quality holds with defaults, but tweak warmup/skip for edge cases.


