pheonix-delta

Run a <400ms latency Voice Agent on just 4GB VRAM. Fully offline, no API keys required. Optimized for GTX 1650 and edge robotics with zero-copy inference. (Apache 2.0)

66 stars · 2 forks · 100% credibility
Found Feb 07, 2026 at 23 stars (3x growth since).
AI Analysis (Python)

AI Summary

AXIOM is an offline voice assistant designed for robotics labs, offering real-time speech interaction, contextual conversations, knowledge retrieval, and interactive 3D equipment visualizations through a web interface.

How It Works

1. 📰 Discover AXIOM

You hear about a friendly voice helper for robotics labs that answers questions about equipment and projects right on your computer.

2. 📥 Bring it home

Download the free package to your computer and open the folder.

3. 🔧 Prepare the brains

Link the voice thinking pieces so it can listen, understand, and speak naturally.

4. 🚀 Wake it up

Click to start and watch your voice assistant come alive on your screen.

5. 🎤 Start chatting

Click the microphone and ask about robot dogs, cameras, or project ideas.

6. 👀 See and hear magic

Listen to clear spoken answers while cool 3D models spin and light up on screen.

7. 💬 Keep talking naturally

Ask follow-up questions and it remembers what you talked about before.

✨ Your lab buddy is ready

Enjoy instant help with lab gear, ideas, and demos anytime, all offline.


AI-Generated Review

What is axiom-voice-agent?

Axiom is a Python-powered, fully offline voice agent that runs real-time conversations on 4GB-VRAM hardware like the GTX 1650, hitting <400ms latency without API keys. Speak queries about robotics gear, get intent-aware responses backed by RAG knowledge bases, and see interactive 3D models spin up in a WebGL carousel, all through a local web interface. It addresses a real edge-robotics pain point: the entire voice stack runs locally, so lab demos never depend on a network connection or a cloud account.
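The review describes a local speech-to-text → LLM/RAG → text-to-speech loop. A minimal sketch of that pipeline shape, with per-stage latency tracking against a <400ms budget (all stage functions here are stand-in stubs, not AXIOM's actual API):

```python
import time

def run_pipeline(audio_chunk, stages):
    """Pass data through each (name, fn) stage, recording per-stage latency in ms."""
    timings = []
    data = audio_chunk
    for name, fn in stages:
        start = time.perf_counter()
        data = fn(data)  # each stage transforms the previous stage's output
        timings.append((name, (time.perf_counter() - start) * 1000))
    return data, timings

# Stubs standing in for real local models (hypothetical, for illustration only).
stages = [
    ("stt", lambda audio: "what cameras are in the lab"),  # e.g. a Whisper-class ASR
    ("llm", lambda text: f"Answering: {text}"),            # small local LLM + RAG lookup
    ("tts", lambda text: text),                            # local TTS (returns text here)
]

reply, timings = run_pipeline(b"\x00" * 320, stages)
total_ms = sum(ms for _, ms in timings)
# A <400 ms end-to-end target means each real stage must stay well under ~130 ms.
```

Measuring each stage separately like this is how you'd verify which of the three models is eating the latency budget on a 4GB card.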

Why is it gaining traction?

Zero-copy inference reportedly cuts memory use by 94% and latency by 2.4%, multi-turn context keeps chats natural, and dual correctors ensure clean TTS output. Devs like the Apache 2.0 license, sub-2s end-to-end responses on consumer GPUs, and easy local runs: no cloud bills, just `python main_agent_web.py` for an instant WebSocket voice interface. Published benchmarks back the <400ms claim, a strong hook for fully offline edge agents.
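The `main_agent_web.py` entry point reportedly serves voice over WebSocket. One plausible way to frame streaming microphone audio as JSON text messages for such a channel (this schema is a guess for illustration, not the repo's actual protocol):

```python
import base64
import json

def encode_audio_frame(pcm_bytes: bytes, seq: int, final: bool = False) -> str:
    """Wrap a raw PCM chunk as a JSON text frame for a WebSocket voice channel.
    The message schema is hypothetical; a real server defines its own format."""
    return json.dumps({
        "type": "audio",
        "seq": seq,            # sequence number so the server can detect gaps
        "final": final,        # marks the last chunk of an utterance
        "data": base64.b64encode(pcm_bytes).decode("ascii"),
    })

def decode_audio_frame(frame: str) -> bytes:
    """Recover the raw PCM bytes from an encoded frame."""
    msg = json.loads(frame)
    assert msg["type"] == "audio"
    return base64.b64decode(msg["data"])

frame = encode_audio_frame(b"\x01\x02\x03", seq=0)
assert decode_audio_frame(frame) == b"\x01\x02\x03"
```

Base64-in-JSON keeps frames debuggable in the browser console; a latency-sensitive agent would more likely use binary WebSocket frames to skip the encoding overhead.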

Who should use this?

Robotics lab techs querying equipment specs hands-free, edge AI builders deploying on 4GB boards for drones or robots, and students mocking up voice interfaces without infrastructure. Ideal if you need a voice agent for low-VRAM robotics prototypes and want everything running locally.

Verdict

Promising for local voice AI with solid docs and Apache 2.0 freedom, but its young star count signals early maturity; expect tweaks before production use. Grab it to run fully offline on a GTX 1650-class setup today.


