quan-meng / seen2scene
Seen2Scene takes an incomplete real-world 3D scan and generates a complete, coherent 3D scene using visibility-guided flow matching, trained directly on real-world data.
Seen2Scene is a research method that completes partial real-world 3D scans into full, realistic scenes by learning directly from incomplete data.
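The source does not spell out the training objective, but "flow matching" conventionally means regressing a velocity field along a straight-line path from noise to data. The sketch below shows that standard objective plus a hypothetical visibility mask standing in for the paper's visibility guidance; the mask, the point-cloud scene representation, and all variable names are assumptions for illustration, not Seen2Scene's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_targets(x0, x1, t):
    """Linear interpolation path and its constant velocity target,
    as in standard (rectified) flow matching."""
    x_t = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0
    return x_t, v_target

# Toy scene: N points with xyz coordinates; `visible` marks scanned regions.
N = 8
x1 = rng.normal(size=(N, 3))      # complete scene points (data sample)
x0 = rng.normal(size=(N, 3))      # noise sample
visible = rng.random(N) < 0.5     # observed part of the partial scan
t = 0.3                           # interpolation time in [0, 1]

x_t, v_target = flow_matching_targets(x0, x1, t)

# Visibility guidance (sketch): pin the path to the observed points so a
# model trained on (x_t, t) -> v_target only transports the unseen regions.
x_t[visible] = x1[visible]
v_target[visible] = 0.0
```

A network would then be trained to predict `v_target` from `x_t` and `t`, and sampling would integrate that predicted velocity from noise to a complete scene.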
How It Works
You come across this project on GitHub while looking for ways to fill in missing parts of real-world 3D scans.
You play the YouTube video and watch partial 3D rooms turn into complete, realistic scenes.
The preview image shows before-and-after examples of cluttered real rooms completed convincingly.
You skim the summary to see how the method learns from everyday incomplete 3D scans to make them whole.
You click through to the project website for more examples, details, and a chance to try it yourself.
You now know how to turn partial views into coherent, lifelike 3D environments for your own projects.