capitan01R/Comfyui-ZiT-Lora-loader

Architecture-aware LoRA loader for Z-Image Turbo (Lumina2) in ComfyUI. Fixes silent key mismatches by auto-fusing separate Q/K/V into Z-Image's fused QKV format and remapping output projections.

Found Mar 10, 2026 at 19 stars.
AI Summary

A ComfyUI custom node that correctly loads and applies LoRA adapters to Z-Image Turbo models by adapting to their unique structure.

How It Works

1
😕 Struggling with add-ons

You're generating AI images, but your LoRA style adapters aren't applying properly to your Z-Image Turbo model.

2
🔍 Discover the fix

You find this node pack, which makes those LoRAs load correctly for that model's architecture.

3
📥 Add it easily

Copy the files into your ComfyUI installation's custom_nodes folder, restart ComfyUI, and it's ready to go.

4
🧩 Place the magic loader

Drag the new 'Z-Image Turbo LoRA Loader' node onto your workflow canvas.

5
🔗 Connect and tweak

Link it to your base model, choose a LoRA from your collection, and set the strength to taste.

6
🚀 Create enhanced art

Run the workflow, and your images come through with the full effect of your LoRAs applied.

🎉 Perfect results

Enjoy stunning, customized AI artwork that looks exactly how you imagined.
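The install step above, in concrete terms. The clone URL is an assumption pieced together from the author and repo names shown on this page; check the project's GitHub page for the canonical one.

```shell
# From your ComfyUI installation root (assumed URL -- verify on GitHub):
cd custom_nodes
git clone https://github.com/capitan01R/Comfyui-ZiT-Lora-loader.git
# Restart ComfyUI, then search the node menu for 'Z-Image Turbo LoRA Loader'.
```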

AI-Generated Review

What is Comfyui-ZiT-Lora-loader?

This Python node pack for ComfyUI delivers an architecture-aware LoRA loader built for Z-Image Turbo (Lumina2) models. It fixes the silent key mismatches of generic loaders by auto-fusing separate Q/K/V weights into Z-Image's fused QKV format and remapping output projections. Clone it into custom_nodes, wire up your model and LoRA file with the strength slider and enable toggle, and the adapter applies fully, with no more dropped weights.
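The fusion step can be sketched in plain numpy. This is an illustrative sketch only, assuming the standard LoRA down/up factorization with alpha/rank scaling and a Q-then-K-then-V row order; the node itself works on torch tensors inside ComfyUI, and its actual code is in the repo.

```python
import numpy as np

def fuse_qkv_lora(q_pair, k_pair, v_pair, alpha=None):
    """Fuse separate Q/K/V LoRA deltas into one delta for a fused QKV weight.

    Each pair is (down, up): down has shape (rank, in_dim), up has shape
    (out_dim, rank). The result has shape (3 * out_dim, in_dim) and can be
    added onto the model's fused qkv weight.
    """
    deltas = []
    for down, up in (q_pair, k_pair, v_pair):
        rank = down.shape[0]
        scale = (alpha / rank) if alpha is not None else 1.0
        deltas.append(scale * (up @ down))   # (out_dim, in_dim) low-rank delta
    return np.concatenate(deltas, axis=0)    # stack Q, K, V along output rows
```

The Q/K/V stacking order here is an assumption; the loader has to match whatever row layout the Z-Image checkpoint uses for its fused projection.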

Why is it gaining traction?

Generic ComfyUI loaders silently ignore most LoRA attention keys on Z-Image Turbo due to its non-standard fused format; this one nails exact mapping for diffusion_model, transformer, and lycoris prefixes. The stack node handles up to 10 LoRAs with individual strengths and per-slot Q/K/V fusing, saving workflow hassle. Users see stronger, more reliable adaptations without retraining or manual hacks.
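The prefix handling and per-slot grouping described above might look roughly like this. This is a hypothetical sketch: the real node's mapping table is in the repo, and the `to_q`/`to_k`/`to_v` projection names are the diffusers convention, assumed here for illustration.

```python
from collections import defaultdict

# Trainer/export prefixes the review says the loader understands.
PREFIXES = ("diffusion_model.", "transformer.", "lycoris_")

def strip_prefix(key):
    """Drop a known export prefix so keys line up with model module names."""
    for p in PREFIXES:
        if key.startswith(p):
            return key[len(p):]
    return key

def group_qkv(keys):
    """Bucket LoRA keys by attention block so Q/K/V can be fused per slot."""
    groups = defaultdict(dict)
    for key in keys:
        k = strip_prefix(key)
        for proj in ("to_q", "to_k", "to_v"):
            if f".{proj}." in k:
                block = k.split(f".{proj}.")[0]
                groups[block][proj] = key   # keep the original state-dict key
    return dict(groups)
```

Once all three projections land in the same bucket, their deltas can be fused into the model's single QKV weight in one pass.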

Who should use this?

ComfyUI power users generating images with Lumina2 or Z-Image Turbo models, especially when stacking LoRAs from diffusers trainers. AI experimenters building turbo text-to-image pipelines who waste time debugging weak adaptations from mismatched formats.

Verdict

Recommended for Z-Image Turbo workflows: it's a targeted fix that just works. At 19 stars and 1.0% credibility, it's immature with no tests, but clean docs and zero dependencies make it low-risk to try.
