xiaobiaodu

[ICLR 2026] Mobile-GS: Real-time Gaussian Splatting for Mobile Devices

49 stars · 8 forks · 100% credibility · Python
Found Mar 15, 2026 at 49 stars.
AI Summary

Mobile-GS is an open-source project for creating compact 3D scene models from photos that render in real-time on mobile phones using Gaussian Splatting techniques.

How It Works

1. 🔍 Discover Mobile-GS — You hear about a way to turn your photos into smooth 3D scenes that play perfectly on your phone.

2. 📸 Gather your photos — Take pictures of a place or object from different angles, circling around it.

3. 🗺️ Map your scene — Feed the photos into the tool to build a simple 3D blueprint of what you captured.

4. 🎓 Train the lightweight model — Let it learn a quick starter version of your 3D scene that runs fast.

5. Tune for phone speed — Refine it so your scene spins and zooms smoothly right on your mobile screen.

6. 👁️ Render and view — Watch your creation come alive in real time, like a video game on your phone.

🎉 Enjoy your 3D world — Share or explore your lifelike 3D scene anytime, anywhere on your device!
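The "render and view" step above is alpha-compositing of Gaussians, which can be illustrated with a toy 2D version. This is a minimal sketch of the general splatting idea, not the project's actual renderer; all names and parameters here are illustrative.

```python
import numpy as np

def splat(gaussians, H=64, W=64):
    """Alpha-composite 2D Gaussians front-to-back onto an RGB image."""
    ys, xs = np.mgrid[0:H, 0:W]
    image = np.zeros((H, W, 3))
    transmittance = np.ones((H, W))  # how much light still passes through
    for g in sorted(gaussians, key=lambda g: g["depth"]):  # near to far
        d2 = (xs - g["x"]) ** 2 + (ys - g["y"]) ** 2
        alpha = g["opacity"] * np.exp(-d2 / (2 * g["sigma"] ** 2))
        image += (transmittance * alpha)[..., None] * np.array(g["color"])
        transmittance *= 1 - alpha
    return image

scene = [
    {"x": 20, "y": 20, "sigma": 6, "opacity": 0.9, "depth": 1, "color": (1, 0, 0)},
    {"x": 40, "y": 40, "sigma": 9, "opacity": 0.8, "depth": 2, "color": (0, 0, 1)},
]
img = splat(scene)
print(img.shape)  # → (64, 64, 3)
```

Real 3D Gaussian Splatting additionally projects anisotropic 3D Gaussians through the camera and runs this compositing in a tiled GPU rasterizer, which is what makes real-time mobile framerates possible.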


AI-Generated Review

What is Mobile-GS?

Mobile-GS brings real-time 3D Gaussian Splatting to mobile devices, training compact radiance field models from COLMAP scenes that render at high FPS on phones. Using Python and PyTorch with CUDA, it handles datasets like Mip-NeRF 360, Tanks & Temples, and Deep Blending via a two-stage pipeline: pre-training with importance sampling for outdoor/indoor scenes, then fine-tuning with teacher distillation and vector quantization compression. Users get CLI tools to pretrain, fine-tune (with optional multi-view), render/decode compressed .xz files, and compute PSNR/SSIM/LPIPS metrics—output matches full PLY renders.
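Of the metrics the CLI reports, PSNR is the simplest to reproduce. The sketch below is a generic reference implementation of PSNR, not code from the repo itself:

```python
import numpy as np

def psnr(rendered: np.ndarray, reference: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two images (higher = closer match)."""
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 20 * np.log10(max_val) - 10 * np.log10(mse)

# Example: a uniform error of 10 gray levels on an 8-bit image gives MSE = 100
ref = np.full((64, 64, 3), 128, dtype=np.uint8)
out = ref + 10
print(round(psnr(out, ref), 2))  # → 28.13
```

SSIM and LPIPS are structurally and perceptually weighted and need more machinery (windowed statistics, a pretrained network); in practice they come from libraries such as scikit-image and the `lpips` package.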

Why is it gaining traction?

This ICLR 2026 codebase stands out for squeezing Gaussian splats into mobile-friendly sizes while keeping quality, and its Apache 2.0 license permits commercial use. The hook is real-time mobile rendering, plus a straightforward setup on CUDA 12.1/Python 3.11 and integration with Mini-Splatting, OMG, and MVGS. Amid the ICLR 2026 OpenReview buzz, developers are drawn to its compression for AR/VR use without cloud dependency.
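The vector quantization behind that compression can be sketched with a tiny k-means codebook: store one small table of representative vectors plus a per-Gaussian index instead of full per-Gaussian attributes. This is a generic illustration under assumed shapes (48-dim feature vectors, a 16-entry codebook), not the project's implementation:

```python
import numpy as np

def build_codebook(vectors, k=16, iters=10, seed=0):
    """Tiny Lloyd's k-means: returns (codebook [k, d], per-vector indices [n])."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest codebook entry
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        idx = d.argmin(axis=1)
        # Move each entry to the mean of its assigned vectors
        for j in range(k):
            members = vectors[idx == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, idx

# Quantize 10k hypothetical 48-float attribute vectors to 16 codes + 1-byte indices
rng = np.random.default_rng(1)
feats = rng.normal(size=(10000, 48)).astype(np.float32)
cb, idx = build_codebook(feats, k=16)
orig_bytes = feats.nbytes                              # 10000 * 48 * 4
compressed = cb.nbytes + idx.astype(np.uint8).nbytes   # codebook + indices
print(orig_bytes // compressed)  # compression ratio on the order of 100x
```

Reconstruction is just `cb[idx]`, which is why decode stays cheap enough for a phone; the residual `.xz` step in the pipeline then squeezes the indices further with ordinary entropy coding.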

Who should use this?

CV researchers working on mobile radiance fields, AR engineers building phone-based 3D reconstruction from COLMAP captures, and graphics teams deploying real-time novel view synthesis on Android/iOS. It is also well suited to deadline-driven prototypes that need quick mobile evaluation.

Verdict

Promising ICLR 2026 work, but at 49 stars the project is still early: the docs are clear and setup is reliable, yet it lacks broad testing. Fork it and benchmark your own scenes now; for production use, wait for more polish.
