mlpc-ucsd / PixARMesh

Public

(CVPR 2026) PixARMesh: Autoregressive Mesh-Native Single-View Scene Reconstruction

46 stars · 3 forks · 100% credibility
Found Mar 14, 2026 at 46 stars.
Python
AI Summary

PixARMesh is a research tool for generating detailed 3D mesh models of entire scenes and objects directly from a single photograph.

How It Works

1. 🔍 Discover PixARMesh

You find a tool that turns a single photo of a room into a full 3D model built from detailed meshes.

2. 🚀 Set up easily

Pull a ready-to-go Docker container that prepares your workspace without any hassle.

3. 📥 Collect scene photos

Download example room images, depth maps, and mesh files to prepare your data.

4. Pick your adventure

Use a ready model: load pretrained weights to instantly rebuild scenes from photos.

📚 Train your own: feed it data in stages to learn to create even better 3D worlds.

5. Watch 3D emerge

Run inference and watch objects and entire rooms pop into lifelike 3D meshes.

6. 📊 Review your creations

Check how closely the reconstructed shapes and positions match the real scene with simple scores.

🎉 3D magic unlocked

Explore interactive 3D scenes rebuilt from single snapshots, ready for design, games, or sharing.
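The "simple scores" in step 6 are typically point-cloud metrics such as Chamfer distance and F-score, which the repo's evaluation scripts report. A minimal NumPy sketch of both, using synthetic point clouds as stand-ins for points sampled from predicted and ground-truth meshes:

```python
import numpy as np

def chamfer_and_fscore(pred, gt, tau=0.05):
    """Symmetric Chamfer distance and F-score at threshold tau.

    pred: (N, 3) predicted point cloud; gt: (M, 3) ground truth.
    """
    # Pairwise Euclidean distances between the two point sets.
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    d_pred_to_gt = d.min(axis=1)  # nearest GT point per predicted point
    d_gt_to_pred = d.min(axis=0)  # nearest predicted point per GT point
    chamfer = d_pred_to_gt.mean() + d_gt_to_pred.mean()
    precision = (d_pred_to_gt < tau).mean()  # predictions near the GT surface
    recall = (d_gt_to_pred < tau).mean()     # GT surface covered by predictions
    f = 2 * precision * recall / (precision + recall + 1e-8)
    return chamfer, f

# Stand-in data: a noisy copy of a random point cloud.
rng = np.random.default_rng(0)
gt = rng.uniform(size=(256, 3))
pred = gt + rng.normal(scale=0.01, size=gt.shape)
chamfer, f = chamfer_and_fscore(pred, gt)
```

This brute-force O(N·M) version is fine for small samples; real evaluation pipelines usually use a KD-tree for nearest neighbors.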

AI-Generated Review

What is PixARMesh?

PixARMesh reconstructs full 3D scenes—complete with object meshes and poses—from a single image using an autoregressive, mesh-native approach in Python with PyTorch. Developers download datasets like 3D-FRONT, train via two-stage configs with Accelerate (layout then full sequence), or run inference/evaluation on pretrained Hugging Face models. Output: clean PLY meshes and composed GLB scenes, skipping bulky volumetric intermediates for direct mesh generation.

Why is it gaining traction?

Unlike implicit or NeRF-based reconstruction, which demands post-processing, PixARMesh emits editable meshes autoregressively, integrating image and point-cloud conditioning via DINOv2 and EdgeRunner/BPT tokenizers. Among CVPR 2026 paper repos on GitHub, its Docker-ready setup, pretrained Hugging Face weights, and scripts for object- and scene-level evaluation (Chamfer distance, F-score) make experimentation fast. Low multi-GPU overhead via launch.py makes it an easy quick baseline.
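The two-stage Accelerate training and the launch.py entry point mentioned in the review might look like the following on the command line. The config paths and flags are guesses for illustration; only `accelerate launch` and its `--multi_gpu` flag are standard Hugging Face Accelerate usage.

```shell
# Stage 1: scene-layout training (config path is hypothetical).
accelerate launch --multi_gpu launch.py --config configs/stage1_layout.yaml

# Stage 2: full autoregressive mesh-sequence training (also hypothetical).
accelerate launch --multi_gpu launch.py --config configs/stage2_full.yaml
```

Check the repo's README for the actual config names and inference invocation.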

Who should use this?

3D vision researchers prototyping single-view scene reconstruction for AR/VR, especially those targeting the CVPR 2026 deadline or workshops on autoregressive models. Teams fine-tuning on custom indoor datasets like 3D-FRONT for robotics simulation or game-asset generation. CV developers bridging recent CVPR papers to practical mesh pipelines, skipping Gaussian splats.

Verdict

Grab it for cutting-edge autoregressive reconstruction if you work in 3D CV: pretrained models and eval scripts deliver immediate value, though 46 stars and a 1.0% credibility score signal early maturity. Docs are solid via the project page and arXiv, but expect tweaks before production use; watch for updates after CVPR 2026 reviews.


