xie-lab-ml / Lightning-Unified-Video-Editor-via-In-Context-Sparse-Attention

[ICML 2026] The official code for our work "Lightning Unified Video Editor via In-Context Sparse Attention".

AI Summary

An open-source AI tool for editing videos from natural-language prompts, delivering top benchmark performance through efficient sparse-attention processing.

How It Works

1. 🖥️ Discover LIVE-EDITOR

You stumble upon a free tool on GitHub that promises easy AI-powered video edits just by describing changes.

2. 📥 Download the tool

Grab the ready-to-use files and set everything up on your computer in moments.

3. 🤖 Add the AI smarts

Download the pretrained model weights (the "video-understanding brain") from Hugging Face, where the project hosts them.

4. 🎥 Pick your video

Choose a video from your files and type a simple instruction like 'add a crown on her head'.

5. Watch the magic

Hit go and watch the AI swiftly transform your video exactly as described (a minimal command sketch follows this list).

6. 🎉 Enjoy your new video

Get a stunning, professional-looking edited video ready to share, faster than ever before.
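
Putting steps 2-5 together, a minimal run might look like the sketch below. The `inference.py` entry point and its flags mirror the command quoted in the review further down; the file paths and the example prompt are placeholders, not confirmed repo details.

```python
import subprocess

# Hypothetical end-to-end invocation. The script name and flags come from
# the command quoted in the AI-Generated Review; paths are placeholders.
cmd = [
    "python", "inference.py",
    "--input", "video.mp4",                  # source clip to edit
    "--prompt", "add a crown on her head",   # natural-language edit instruction
    "--output", "result.mp4",                # where the edited clip is written
]
subprocess.run(cmd, check=True)  # raise if the edit run fails
```

On the review's numbers, a run like this completes in 32 diffusion steps on a single RTX 4090.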

AI-Generated Review

What is Lightning-Unified-Video-Editor-via-In-Context-Sparse-Attention?

This Python repo delivers a fast, open-source video editor built on diffusion transformers that edits source videos via text prompts—like adding a crown to a character's head while preserving motion and structure. It tackles the O(S²) cost of full self-attention on long joint source+generated token sequences by using in-context sparse attention, slashing compute without retraining. Run `python inference.py --input video.mp4 --prompt "your edit" --output result.mp4` on a single RTX 4090 for results in 32 steps, with TileLang or Triton backends.
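
To make the O(S²) point concrete, here is a toy PyTorch sketch of the general idea: source and generated tokens form one joint sequence, and each generated-token query attends to only a subset of source blocks instead of all of them. The block size, keep ratio, and mask pattern are illustrative assumptions; the repo's actual TileLang/Triton kernels are not reproduced here.

```python
import torch

# Toy in-context sparse attention (assumptions, not the repo's kernel):
# generated-token queries attend to one source block in every four, plus
# all generated tokens, shrinking the effective score matrix.
S_src, S_gen, d = 512, 512, 64
q = torch.randn(S_gen, d)              # queries from generated tokens
k = torch.randn(S_src + S_gen, d)      # keys over the joint sequence
v = torch.randn(S_src + S_gen, d)      # values over the joint sequence

block, keep_every = 64, 4              # hypothetical sparsity pattern
mask = torch.zeros(S_gen, S_src + S_gen, dtype=torch.bool)
for start in range(0, S_src, block * keep_every):
    mask[:, start:start + block] = True  # keep 1 in 4 source blocks
mask[:, S_src:] = True                   # generated tokens attend to each other

scores = (q @ k.T) / d**0.5
scores = scores.masked_fill(~mask, float("-inf"))
out = scores.softmax(dim=-1) @ v         # (S_gen, d) attention output
```

A real sparse kernel never materializes the masked entries, which is where the speedup over dense attention comes from; this dense-mask version only shows which scores survive.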

Why is it gaining traction?

It tops EditVerse benchmarks (e.g., a 0.289 CLIP-T score) and runs 2.8x faster than FlashAttention-2 at 65K tokens, making text-driven video edits practical on consumer GPUs. Pluggable sparse kernels and 80-step fine-tuning make it a drop-in for Wan 2.2 models, and it ships with an arXiv preprint and Hugging Face weights, the packaging you'd expect from an ICML 2026 code release. Developers pick it up for sparse-attention speedups without hitting the dense-compute wall.
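
As a rough sanity check on why sparsity matters at the quoted scale, here is back-of-the-envelope attention arithmetic. Only the 65K sequence length comes from the claim above; the head dimension and keep ratio are assumptions.

```python
# Back-of-the-envelope attention cost at the quoted sequence length.
S = 65_000        # joint sequence length (from the 65K-token claim)
d = 128           # assumed head dimension
# Two S x S matmuls per head (QK^T and scores @ V), ~2*S*S*d FLOPs each.
dense_flops = 2 * (2 * S * S * d)
keep_ratio = 0.25                 # assume 1 in 4 key blocks is kept
sparse_flops = dense_flops * keep_ratio

print(f"dense : {dense_flops / 1e12:.2f} TFLOPs per head per layer")
print(f"sparse: {sparse_flops / 1e12:.2f} TFLOPs per head per layer")
# ~2.16 TFLOPs dense vs ~0.54 sparse: even one head saves over a
# teraflop per layer, provided the kernel actually skips pruned blocks
# instead of just masking them.
```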

Who should use this?

Video ML researchers preparing ICML 2026 submissions or rebuttals, or prototyping applications like personalized ad edits and AR overlays. Developers at startups building text-driven video tools, especially those targeting efficient-transformer workshops. Skip it if you're not on an H100 or RTX 4090, or if you need multi-GPU scale today.

Verdict

A solid pick for fast video-editing inference if you want to experiment with an ICML 2026 code release: the benchmarks and speed impress, but 19 stars and a 1.0% credibility score signal an early-stage project. Read the paper first and test the demos thoroughly before any production use.
