
[CVPR 2026] Official code of "EmbodiedSplat: Online Feed-Forward Semantic 3DGS for Open-Vocabulary 3D Scene Understanding"

Found Mar 06, 2026 at 19 stars.
AI Analysis
AI Summary

A placeholder repository for an academic research project that reconstructs interactive 3D scenes with object labels from real-time video streams, awaiting full code release.

How It Works

1
📰 Discover EmbodiedSplat

You hear about a cool new way to turn everyday videos into interactive 3D models of rooms and objects.

2
🌐 Visit the project page

You land on the GitHub page and see a stunning teaser image showing a living room rebuilt in 3D.

3
🔥 Get excited by the promise

You learn it processes streaming photos in real-time to create labeled 3D scenes for things like object detection and new views.

4
📖 Dive into the details

You check out the research paper and project website to understand how it works for scene understanding.

5
Star and stay tuned

You mark the page as a favorite to get notified when the tools are ready to download and try yourself.

6
🎉 Build your 3D worlds

Once available, you capture videos and instantly get vivid 3D models with smart labels on everything.

AI-Generated Review

What is EmbodiedSplat?

EmbodiedSplat takes streaming images from a mobile device and builds semantic 3D Gaussian Splats online, in a feed-forward manner, for open-vocabulary scene understanding at 5-6 FPS. It delivers whole-scene reconstructions that power tasks like 3D semantic segmentation, 2D rendered segmentation, novel-view color synthesis, and depth rendering, making it a fit for personalized real-to-sim-to-real navigation from mobile-device captures. Built on PyTorch and Lightning, it targets real-time perception without offline, per-scene training.
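The "open-vocabulary" part of a semantic 3DGS pipeline like this is typically a similarity match between per-Gaussian semantic features and text embeddings from a vision-language model. Since the code is not yet released, the sketch below is only an illustration of that general idea, assuming CLIP-style features; all names, shapes, and the random toy data are hypothetical and do not reflect the repo's actual API:

```python
import numpy as np

def cosine_sim(a, b):
    # Row-normalize both matrices, then take dot products:
    # (N, D) x (K, D) -> (N, K) similarity matrix.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Toy per-Gaussian semantic features. In a real system these would be
# distilled from a vision-language model, not sampled at random.
rng = np.random.default_rng(0)
gaussian_feats = rng.normal(size=(1000, 8))   # N Gaussians, D-dim features

# Text embeddings for an arbitrary, user-chosen vocabulary -- the
# vocabulary is open because it is just a list of strings to embed.
labels = ["chair", "table", "sofa"]
text_embeds = rng.normal(size=(len(labels), 8))

# Open-vocabulary labeling: each Gaussian gets the label whose text
# embedding is most similar to its own feature.
sims = cosine_sim(gaussian_feats, text_embeds)   # (N, K)
label_ids = sims.argmax(axis=1)                  # (N,)
print(label_ids.shape)  # (1000,)
```

Because labels come from text embeddings rather than a fixed class head, the same reconstruction can be re-queried with new vocabularies after capture, which is what makes the approach attractive for navigation and AR use cases.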

Why is it gaining traction?

As the official repo for a freshly accepted CVPR 2026 paper, it is drawing early attention ahead of the conference, much as official code releases for CVPR 2024 and 2025 papers did. The hook is its speed for open-vocabulary 3DGS on consumer hardware, outpacing slower alternatives in dynamic environments, with a project page teasing demos that compare favorably against earlier baselines.

Who should use this?

CV researchers iterating on related CVPR submissions, robotics engineers prototyping real-to-sim navigation stacks, and embodied AI devs needing fast semantic maps from phone cameras for AR/VR apps or autonomous drones.

Verdict

A promising CVPR 2026 watchlist pick, but its low credibility score (1.0%), 19 stars, and zero code released (a TODO pending June) mean you should bookmark it, not integrate it yet. Solid docs via the project page keep it on the radar ahead of CVPR 2026 workshops.

