
cvg / YoNoSplat


[ICLR'26] YoNoSplat: You Only Need One Model for Feedforward 3D Gaussian Splatting

132 stars · 4 forks · 100% credibility
Found Feb 23, 2026 at 53 stars
AI Analysis
Python
AI Summary

Placeholder GitHub repository for an academic research project introducing YoNoSplat, a method for fast 3D scene creation from images using a single model, with code release scheduled for 2026.

How It Works

1
🔍 Discover YoNoSplat

You hear about an exciting new project that reconstructs lifelike 3D scenes from everyday photos using a single model.

2
🌐 Visit the Project Home

You head to the project's GitHub repository to learn more about it.

3
👥 Meet the Team

You read about the researchers who created the project and browse their profiles.

4
📄 Explore the Story

You dive into the project page and paper to understand how it makes 3D reconstruction faster and simpler.

5
Check Readiness

You notice that the code to try it yourself is scheduled for release by February 2026.

6
🎉 Get Excited for Launch

You're thrilled and ready to build stunning 3D worlds from your photos once everything is set up.


Star Growth

This repo grew from 53 to 132 stars.
AI-Generated Review

What is YoNoSplat?

YoNoSplat introduces a single model for feedforward 3D Gaussian splatting, tackling the inefficiency of per-scene optimization in novel view synthesis. You feed it sparse images, and it spits out renderable 3D Gaussians in one pass—no training loops or iterative fitting needed. Backed by an ICLR'26 paper from ETH Zurich researchers, it promises real-time performance across diverse scenes with just one unified model.
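Since the code is not released yet, YoNoSplat's actual interface is unknown. As a rough illustration of what "sparse images in, renderable 3D Gaussians out in one pass" means in practice, here is a toy stand-in (every name, shape, and value below is an assumption for illustration, not YoNoSplat's API): a common feedforward layout regresses one Gaussian per input pixel, each described by a mean, scale, rotation, opacity, and color.

```python
import numpy as np

def predict_gaussians(images):
    """Toy sketch of a feedforward splatting head: images in, Gaussians out.

    images: float array of shape (V, H, W, 3) -- V posed input views.
    Returns one Gaussian per input pixel (a common feedforward layout).
    """
    V, H, W, _ = images.shape
    n = V * H * W
    # A real model would run a neural network here in a single forward pass;
    # we fake the regression head with constants so the output shapes are concrete.
    feats = images.reshape(n, 3)
    return {
        "means":     feats.copy(),                     # (n, 3) world-space positions
        "scales":    np.full((n, 3), 0.01),            # (n, 3) anisotropic extents
        "rotations": np.tile([1.0, 0.0, 0.0, 0.0], (n, 1)),  # (n, 4) unit quaternions
        "opacities": np.full((n, 1), 0.5),             # (n, 1) alpha per Gaussian
        "colors":    feats.copy(),                     # (n, 3) RGB
    }

# Two tiny 4x4 views -> 32 Gaussians, produced in one pass with no optimization loop.
views = np.zeros((2, 4, 4, 3), dtype=np.float32)
gaussians = predict_gaussians(views)
print(gaussians["means"].shape)  # (32, 3)
```

The point of the sketch is the shape of the contract, not the math: a single forward pass maps a handful of posed images directly to a renderable Gaussian set, replacing the per-scene iterative fitting of standard 3D Gaussian splatting.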

Why is it gaining traction?

It ditches the multi-stage pipelines of standard Gaussian splatting methods, delivering instant results from a feedforward model that generalizes without scene-specific tweaks. Developers dig the "you only need one model" hook for slashing compute in AR/VR pipelines or robotics sims. Early arXiv buzz and 45 stars signal hype in the hot Gaussian splatting space.

Who should use this?

Computer vision researchers benchmarking radiance fields against NeRFs. Graphics engineers building real-time 3D recon for mobile AR apps. Teams optimizing novel view synthesis in robotics or content creation workflows.

Verdict

Hold off—code lands February 2026, so 1.0% credibility score and 45 stars reflect pre-release status with barebones docs. Solid paper foundation makes it worth starring now for future feedforward Gaussian wins.


