Tencent-Hunyuan

Official Implementation of OmniWeaving: Towards Unified Video Generation with Free-form Composition and Reasoning

AI Summary

OmniWeaving is a unified open-source model for generating and editing videos from text, images, multiple images, or existing videos with advanced reasoning capabilities.

How It Works

1. 🔍 Discover OmniWeaving: you hear about a tool that turns words, photos, or clips into polished videos.
2. 📥 Get the video maker: download the kit, which includes everything you need to start creating.
3. 🎨 Pick your creation style: decide whether you want a video from words, a photo that moves, or an edit of an existing clip.
4. Add your spark:
   - 📝 From words: just describe your dream scene.
   - 🖼️ From pictures: upload one or more photos to bring to life.
   - 🎥 Edit a clip: pick a video and say how to change it.
5. 🚀 Make it happen: click go and watch your ideas turn into smooth, lively videos (a rough command-line sketch of this step follows this list).
6. 🎉 Enjoy your video: download or share a creation that matches exactly what you pictured.
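To make step 5 concrete, here is a minimal command-line sketch of a text-to-video run. Only the `--task t2v` flag is mentioned in the review further down; the `generate.py` script name and the `--prompt` / `--save-path` flags are hypothetical placeholders, so check the repo's README for the real entry point.

```bash
# Hypothetical sketch: entry-point script and all flags except --task are assumptions.
python generate.py \
    --task t2v \
    --prompt "A paper boat drifting down a rain-soaked street at dusk" \
    --save-path ./outputs/paper_boat.mp4
```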

AI-Generated Review

What is OmniWeaving?

OmniWeaving is the official implementation of a unified video generation model from Tencent Hunyuan, handling free-form composition and reasoning across tasks like text-to-video, image-to-video, keyframe interpolation, video editing, and multi-image scenes. Developers download models from Hugging Face and run inference via Python CLI commands (e.g., `--task t2v` or `--task reference2v`), producing high-fidelity videos from interleaved text, images, or videos. Built on Python with optimizations like Flash Attention, it unifies fragmented video workflows into one flexible pipeline.
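A hedged sketch of that download-then-infer workflow: `huggingface-cli download` is a real Hugging Face Hub command, but the model repo id, local directory, entry-point script, and every flag other than `--task t2v` are assumptions made for illustration.

```bash
# Fetch checkpoints from Hugging Face (the repo id below is a placeholder, not confirmed).
huggingface-cli download Tencent-Hunyuan/OmniWeaving --local-dir ./ckpts

# Run text-to-video inference; the script name and the non --task flags are hypothetical.
python generate.py \
    --task t2v \
    --ckpt-dir ./ckpts \
    --prompt "A timelapse of a city skyline moving from night into sunrise"
```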

Why is it gaining traction?

It excels at free-form composition, blending 2-4 reference images into dynamic scenes with precise subject control, plus a reasoning mode that refines ambiguous prompts automatically. As the official repository with checkpoints, training data pipelines, and the IntelligentVBench eval suite, it delivers reproducible state-of-the-art results among open-source unified models. Developers also like the task-specific flags and multi-GPU support, which allow quick prototyping without custom hacks.
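To make the multi-image and multi-GPU claims concrete, a speculative invocation is sketched below. `torchrun --nproc_per_node` is standard PyTorch, and `--task reference2v` comes from this review, but the reference-image, reasoning, and output flags are invented placeholders.

```bash
# Speculative sketch: blend multiple reference images into one scene across several GPUs.
# Only --task reference2v is documented in this review; every other flag is assumed.
torchrun --nproc_per_node=4 generate.py \
    --task reference2v \
    --ref-images subject.png background.png prop.png \
    --prompt "The subject walks through the background scene holding the prop" \
    --enable-reasoning \
    --save-path ./outputs/composed_scene.mp4
```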

Who should use this?

ML engineers building video apps that need multimodal inputs, such as reference-driven editing or compositional generation (e.g., inserting subjects into videos); video researchers benchmarking free-form tasks; and content-tool developers replacing brittle pipelines with a single model for T2V/I2V/V2V.
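For the reference-driven editing use case, here is a sketch of what a video-to-video call might look like, assuming the repo exposes an editing task alongside `t2v` and `reference2v`; the `v2v` task name, script name, and flags are all hypothetical.

```bash
# Hypothetical video-editing call: the v2v task name and every flag are assumptions.
python generate.py \
    --task v2v \
    --input-video skate_park.mp4 \
    --ref-images new_rider.png \
    --prompt "Replace the rider with the person in the reference image" \
    --save-path ./outputs/edited_skate_park.mp4
```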

Verdict

Worth forking for unified video generation experiments, but 89 stars and a 1.0% credibility score signal an early-stage project; solid official releases and docs help, though test coverage lags. Prototype now if free-form composition fits your needs; otherwise, monitor for stability.
