Tencent-Hunyuan

HY-World 2.0: A Multi-Modal World Model for Reconstructing, Generating, and Simulating 3D Worlds

Found Apr 16, 2026 at 285 stars.
AI Summary

HY-World 2.0 is an open-source AI system that reconstructs editable 3D worlds from photos or videos and generates new ones from text or single images.

How It Works

1. 🔍 Discover 3D Magic: You stumble upon HY-World while browsing AI demos and get excited about turning your photos into interactive 3D worlds.

2. 📥 Get It Ready: Download the free tool and set it up on your computer in a few minutes; no coding needed.

3. 📱 Upload Your Photos: Drag and drop your images, or a short video, of a scene you want to explore in 3D.

4. ✨ Watch It Rebuild: Hit 'Reconstruct' and watch the AI build a detailed 3D version of your scene, complete with point clouds, depth maps, and camera poses.

5. 🧭 Dive In and Play: Spin, zoom, and walk through your new 3D world right in the browser viewer.

6. 🎉 Export Your World: Download your 3D model to use in game engines or editing software, or to share with friends; your scene is now permanently explorable.
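To make step 4 concrete: "points, depths, and cameras" fit together through the pinhole camera model, where each depth pixel is unprojected into a 3D point using the camera intrinsics. The sketch below is a minimal pure-Python illustration of that idea; the function name and toy values are ours, not HY-World's actual code.

```python
# Pinhole unprojection: turn a depth map plus camera intrinsics into 3D points.
# Illustrative sketch only -- HY-World's internals are not published in this form.

def unproject(depth, fx, fy, cx, cy):
    """Convert a 2D depth map (list of rows) into camera-space 3D points."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d <= 0:          # skip invalid / empty pixels
                continue
            x = (u - cx) * d / fx
            y = (v - cy) * d / fy
            points.append((x, y, d))
    return points

# 2x2 toy depth map, principal point at the image center, focal length 1.0
pts = unproject([[1.0, 1.0], [1.0, 2.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(len(pts))   # 4
print(pts[0])     # (-0.5, -0.5, 1.0)
```

With estimated camera poses per view, these per-image point sets are fused into the single navigable scene you explore in step 5.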

AI-Generated Review

What is HY-World-2.0?

HY-World 2.0 is a Python-based multi-modal world model that reconstructs, generates, and simulates editable 3D worlds from text, single images, multi-view photos, or videos. Users get real 3D assets such as meshes and Gaussian splats, importable into Unity or Unreal, via a simple pipeline API or a Gradio web demo. It addresses the "video-only" limitation of prior models such as HY World 1.5 by outputting persistent, navigable scenes instead of fleeting clips.
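The "simple pipeline API" follows the pattern popularized by Hugging Face diffusers: one class loads pretrained weights, then is called like a function. The toy stub below shows only that calling pattern; `WorldPipeline`, the repo id, and the return value are all hypothetical, not HY-World's real API.

```python
# Toy sketch of a diffusers-style pipeline interface.
# All names here are illustrative assumptions, not HY-World's actual classes.

class WorldPipeline:
    def __init__(self, name):
        self.name = name  # a real pipeline would hold loaded model weights

    @classmethod
    def from_pretrained(cls, repo_id):
        # a real pipeline downloads checkpoints from the HF Hub here;
        # this stub just records the requested id
        return cls(repo_id)

    def __call__(self, image_paths):
        # a real pipeline would return meshes / Gaussian splats;
        # the stub returns a summary dict instead
        return {"model": self.name, "views": len(image_paths)}

pipe = WorldPipeline.from_pretrained("tencent/HY-World-2.0")  # hypothetical id
out = pipe(["view1.jpg", "view2.jpg"])
print(out)  # {'model': 'tencent/HY-World-2.0', 'views': 2}
```

The appeal of this pattern is that weight download, config, and preprocessing hide behind two calls, which is what makes the one-shot workflow practical.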

Why is it gaining traction?

It stands out by delivering state-of-the-art accuracy on reconstruction benchmarks (e.g., Tanks-and-Temples, 7-Scenes) while supporting flexible inputs and one-shot inference on consumer GPUs. Developers like the diffusers-style API, auto-downloaded Hugging Face models, and interactive Gradio app for quick 3DGS and point-cloud visualization, which is far more practical than hand-tuning COLMAP pipelines. Partial open-sourcing of WorldMirror 2.0 inference hooks users now, with full generation code to follow.
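Point-cloud assets like those mentioned above travel between tools as PLY files, a plain-text format that most 3D viewers and engines can open. Here is a minimal stdlib-only ASCII PLY writer as an illustration of that asset format; HY-World's actual export utilities may differ.

```python
# Minimal ASCII PLY writer for a colored point cloud -- the kind of portable
# asset a reconstruction pipeline exports for external viewers or engines.
# Pure-stdlib sketch; not taken from the HY-World codebase.

def write_ply(path, points):
    """points: iterable of (x, y, z, r, g, b) tuples."""
    pts = list(points)
    header = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(pts)}",
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        "end_header",
    ]
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for x, y, z, r, g, b in pts:
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

write_ply("cloud.ply", [(0.0, 0.0, 1.0, 255, 0, 0)])  # one red point
print(open("cloud.ply").read().splitlines()[0])  # "ply"
```

Gaussian splats are commonly serialized as PLY too, with extra per-vertex properties (scale, rotation, opacity, spherical-harmonic colors) added to the header.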

Who should use this?

3D reconstruction researchers benchmarking against SEVA or Pow3R. Game devs prototyping AI-generated levels from prompts or scans. Robotics/AR engineers needing fast video-to-3D for simulation worlds.

Verdict

Grab WorldMirror 2.0 for reconstruction today: solid docs, CLI and multi-GPU support, and top benchmark results make it usable despite 86 stars and a 1.0% credibility score. Hold off on full HY-World generation; immaturity shows in pending components, but Tencent's track record suggests it will mature fast.
