3DSceneAgent

Create your own 3D scene with words anywhere.

31 stars (100% credibility)
Found Feb 24, 2026 at 23 stars.
AI Analysis (Python)

AI Summary

Vibe3DScene is a chat-based AI system that builds interactive 3D scenes in Blender from natural language descriptions.

How It Works

1. 🌐 Discover Vibe3DScene

You find a fun tool online that lets you create 3D scenes just by chatting about what you want.

2. 💬 Start chatting

Open the web page, pick a new conversation, and describe your dream scene like 'a cozy room with a table, lamp, and books'.

3. Watch it build

The friendly AI adds objects one by one, shows you pictures from different angles, and asks if it looks right.

4. 🔧 Make tweaks

Tell it to move things around, change sizes, or add more details, and see updates in real time.

5. Check that it looks perfect

Review the final views to make sure everything matches your vision.

🎉 Your scene is ready

Download your complete 3D scene to use anywhere, feeling proud of what you created with just words.
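The steps above boil down to a stateful chat session that accumulates objects and keeps snapshots so edits can be undone. Here is a minimal sketch of that idea in plain Python; the `SceneSession` class and its methods are hypothetical illustrations, not the project's real API (the actual agent drives Blender, not a dict):

```python
import copy

class SceneSession:
    """Hypothetical stand-in for one chat-driven scene-building session."""

    def __init__(self):
        self.objects = {}      # object name -> properties
        self._snapshots = []   # undo history

    def add(self, name, **props):
        # Snapshot before every change so the agent can roll back.
        self._snapshots.append(copy.deepcopy(self.objects))
        self.objects[name] = props

    def tweak(self, name, **props):
        self._snapshots.append(copy.deepcopy(self.objects))
        self.objects[name].update(props)

    def undo(self):
        if self._snapshots:
            self.objects = self._snapshots.pop()

# "a cozy room with a table, lamp, and books"
session = SceneSession()
session.add("table", size=(1.2, 0.8, 0.75))
session.add("lamp", position="on table")
session.add("books", count=3)
session.tweak("lamp", brightness=0.7)  # step 4: make tweaks
session.undo()                         # roll back the last change
print(sorted(session.objects))         # ['books', 'lamp', 'table']
```

The snapshot-before-change pattern mirrors the undo behavior the tool advertises: every edit is reversible, so you can experiment freely in chat.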


Star Growth

This repo grew from 23 to 31 stars.
AI-Generated Review

What is Vibe3DScene?

Vibe3DScene lets you build 3D scenes in Blender just by chatting: describe objects, layouts, or vibes in natural language, and a Python-based AI agent assembles them using text-to-3D generation, asset retrieval from PolyHaven/Sketchfab/Objaverse, and procedural tools like Infinigen. It runs headless via Docker for serverless deployments or pairs with a local Blender GUI, and it outputs renders, GLBs, or .blend files to web, mobile, or CLI clients. Perfect for creating your own 3D map or texture pack without manual modeling.
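The asset-sourcing strategy described above (check curated libraries first, fall back to text-to-3D generation) can be sketched as a simple resolution chain. Every function here is an illustrative stub with made-up data, not the project's real interface:

```python
def search_library(query, sources=("PolyHaven", "Sketchfab", "Objaverse")):
    """Pretend library lookup: return (source, asset_id) or None."""
    catalog = {"wooden table": ("PolyHaven", "table_01")}  # fake index
    for source in sources:
        hit = catalog.get(query)
        if hit and hit[0] == source:
            return hit
    return None

def generate_model(query):
    """Stand-in for a text-to-3D backend (Rodin/Trellis/Hunyuan-style)."""
    return ("generated", f"{query.replace(' ', '_')}.glb")

def resolve_asset(query):
    # Prefer existing assets; only generate when nothing is found.
    return search_library(query) or generate_model(query)

print(resolve_asset("wooden table"))   # hit in a library
print(resolve_asset("dragon statue"))  # falls back to generation
```

Preferring retrieval over generation keeps common objects fast and cheap, reserving the slower generative backends for things no library stocks.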

Why is it gaining traction?

It stands out by wrapping Blender's Python API in a scalable MCP server, so you chat to import assets, render views, undo via snapshots, or generate models from Rodin/Trellis2/Hunyuan, with no IDE needed. Docker Compose handles multi-worker scaling with Redis/Nginx, and the React frontend streams live GLTF previews plus todo tracking. Devs dig the "render-and-verify" loop that auto-fixes bad outputs, making it feel like building your own AI agent for quick prototypes.
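The "render-and-verify" loop mentioned above can be sketched as: render the scene, ask a verifier whether it matches the request, apply a fix, and retry up to a cap. In this hedged sketch the verifier is a mock counter; the real system reportedly sends actual renders to a VLM such as GPT-4o or Claude:

```python
def render(scene):
    # Stand-in for a real Blender render call.
    return f"render of {scene['request']}"

def looks_right(image, attempt, passes_on=3):
    # Mock verifier: real code would send `image` to a VLM for judgment.
    return attempt >= passes_on

def build_with_verification(request, max_attempts=5):
    scene = {"request": request, "fixes": 0}
    for attempt in range(1, max_attempts + 1):
        image = render(scene)
        if looks_right(image, attempt):
            return scene, attempt
        scene["fixes"] += 1  # agent edits the scene, then retries
    raise RuntimeError("could not satisfy request")

scene, attempts = build_with_verification("cozy room")
print(attempts, scene["fixes"])  # 3 2
```

The `max_attempts` cap matters in practice: a verifier that never approves would otherwise burn VLM tokens forever.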

Who should use this?

AI researchers prototyping vision-language 3D workflows, game devs spinning up Minecraft-like scenes or custom skins via text, and indie creators building product viz or AR assets without 3D expertise. Ideal if you're deploying your own MCP server or Docker image for collaborative scene editing, like teams iterating on websites with embedded 3D.

Verdict

Grab it for experiments if you have API keys for VLMs like GPT-4o/Claude: early traction (17 stars at the time of review) shows promise, but the 1.0% credibility score flags active development with likely bugs and breaking changes. Solid docs and Docker support make it playable now; fork and contribute to help it mature.


