wz0919 / AnchorWeave

Official implementation of AnchorWeave: World-Consistent Video Generation with Retrieved Local Spatial Memories

80 stars · 100% credibility · Found Feb 18, 2026 at 23 stars
Python
AI Summary

AnchorWeave is an AI framework that generates camera-controllable videos with strong spatial consistency by using local geometric memories from retrieved anchor clips.

How It Works

1
🔍 Discover AnchorWeave

You find this cool tool online that helps create smooth videos where scenes stay perfectly consistent as the camera moves.

2
📥 Grab the ready tools

Download the pre-trained model weights and place them in a folder on your computer.

3
🖼️ Pick your starting picture

Choose a photo of a scene and write a short description of the video you want to create.

4
🎥 Add helper clips

Select a few short reference videos from similar angles to guide the scene's layout.

5
✨ Generate the magic

Hit go and watch as it weaves everything together into a fluid, realistic video that holds the world steady.

6
👀 Preview your creation

Play back the video to check that the scene stays consistent across every frame.

7

🎉 Share your masterpiece

Export and share your lifelike, consistent video that feels like real footage.
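The steps above boil down to a simple data flow: image + prompt + retrieved anchor clips in, video out. Here is a minimal sketch of that flow in plain Python; every function name, pose value, and clip ID below is hypothetical and only illustrates the wiring, not the repo's actual scripts or APIs:

```python
# Hypothetical sketch of the AnchorWeave workflow above.
# None of these names come from the actual repo; they only illustrate
# the data flow: image + prompt + anchor clips -> generated video.

def retrieve_anchor_clips(library, target_pose, k=3):
    """Pick the k reference clips whose camera angle is closest to the target."""
    # Each library entry: (clip_id, yaw), where yaw is a camera angle in degrees.
    ranked = sorted(library, key=lambda clip: abs(clip[1] - target_pose))
    return [clip_id for clip_id, _ in ranked[:k]]

def generate_video(image, prompt, anchors, num_frames=49):
    """Stand-in for the diffusion model: returns a descriptor of the output."""
    return {"frames": num_frames, "prompt": prompt, "anchors": anchors}

# Step 3: starting picture plus a short description.
image, prompt = "living_room.png", "slow pan across a sunlit living room"
# Step 4: helper clips from similar angles (yaw in degrees, made-up values).
library = [("clip_a", 10.0), ("clip_b", 95.0), ("clip_c", 15.0), ("clip_d", 180.0)]
anchors = retrieve_anchor_clips(library, target_pose=12.0, k=2)
# Step 5: generate.
video = generate_video(image, prompt, anchors)
print(anchors)          # the two clips nearest the target viewpoint
print(video["frames"])  # 49
```

The stub "model" obviously does nothing; the point is only where the anchor clips enter the pipeline relative to the image and prompt.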


AI-Generated Review

What is AnchorWeave?

AnchorWeave is the official GitHub repository for a memory-augmented video generation model that creates world-consistent long videos from an input image and camera trajectory. It pulls in retrieved local spatial memories—short anchor clips—to guide generation, dodging errors from global 3D reconstructions that plague other camera-controllable tools. Developers get Python scripts for training on datasets like RealEstate10K and inference via a Diffusers pipeline, outputting MP4s with strong spatial fidelity.
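Since the model is conditioned on an input image and a camera trajectory, it helps to see what a trajectory concretely is: a sequence of camera poses, one per frame. The sketch below builds a simple horizontal orbit as 4x4 camera-to-world matrices, a common convention in camera-controllable video work, though not necessarily the repo's exact pose format:

```python
import numpy as np

# A camera trajectory is a sequence of poses, one per output frame.
# This builds a horizontal orbit around the origin as 4x4 camera-to-world
# matrices (columns: right, up, forward axes; last column: position).
# This is a generic convention, not necessarily the repo's pose format.

def orbit_trajectory(num_frames=8, radius=2.0):
    poses = []
    for i in range(num_frames):
        theta = 2 * np.pi * i / num_frames
        # Camera position on a circle around the origin (world up = +y).
        pos = np.array([radius * np.cos(theta), 0.0, radius * np.sin(theta)])
        # Look-at the origin: derive an orthonormal camera frame.
        forward = -pos / np.linalg.norm(pos)
        right = np.cross(np.array([0.0, 1.0, 0.0]), forward)
        right /= np.linalg.norm(right)
        up = np.cross(forward, right)
        pose = np.eye(4)
        pose[:3, 0], pose[:3, 1] = right, up
        pose[:3, 2], pose[:3, 3] = forward, pos
        poses.append(pose)
    return np.stack(poses)  # shape (num_frames, 4, 4)

traj = orbit_trajectory()
print(traj.shape)  # (8, 4, 4)
```

Datasets like RealEstate10K ship per-frame poses in essentially this form, which is why scripted trajectories like the orbit above are a convenient way to test camera control.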

Why is it gaining traction?

Unlike baselines that fuse noisy multi-view geometry, AnchorWeave weaves clean local anchors with coverage-driven retrieval and multi-anchor control, yielding sharper long-horizon consistency without quality drops. The hook: plug-and-play CogVideoX integration, CLI demos for quick tests, and ablation-ready code mirroring the paper. Early adopters praise the camera pose handling for realistic trajectories.
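One plausible reading of "coverage-driven retrieval" is greedy set cover: keep picking the anchor clip that covers the most still-unseen parts of the target trajectory. The sketch below illustrates that idea only; it is not the paper's actual algorithm, and the region IDs and clip names are made up:

```python
# Illustrative sketch of coverage-driven retrieval as greedy set cover:
# pick anchor clips until the regions visible along the target trajectory
# are covered. This is one plausible reading of "coverage-driven", not
# the paper's actual algorithm; regions and clips here are invented.

def greedy_coverage_retrieval(target_regions, clips, max_anchors=3):
    """clips: dict of clip_id -> set of region IDs that clip observes."""
    uncovered = set(target_regions)
    chosen = []
    while uncovered and len(chosen) < max_anchors:
        # Pick the clip covering the most still-uncovered regions.
        best = max(clips, key=lambda c: len(clips[c] & uncovered))
        if not clips[best] & uncovered:
            break  # no remaining clip adds any coverage
        chosen.append(best)
        uncovered -= clips[best]
    return chosen, uncovered

clips = {
    "anchor_1": {"kitchen", "hallway"},
    "anchor_2": {"hallway", "bedroom"},
    "anchor_3": {"kitchen", "bedroom", "balcony"},
}
chosen, missed = greedy_coverage_retrieval({"kitchen", "hallway", "bedroom"}, clips)
print(chosen)  # greedy order: best-covering clip first
print(missed)  # empty set -> the whole trajectory is covered
```

Multi-anchor control would then condition generation on all of `chosen` at once, which is why retrieval quality matters more here than in single-reference pipelines.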

Who should use this?

ML engineers fine-tuning I2V models for AR/VR sims or robotics viz, where camera motion demands persistent scenes. Video researchers reproducing the AnchorWeave paper or extending to custom datasets with poses and masks. Teams ditching inconsistent global NeRFs for local-memory speed.

Verdict

Grab it if you're prototyping controllable video generation: the solid Diffusers base and official implementation make reproductions straightforward. But it's still raw: docs are README-only and there are no tests, so budget time for data-prep tweaks.
