hwdsl2/docker-ai-stack

Deploy a complete, self-hosted AI stack on your own server with one command. Includes Ollama (LLM), LiteLLM (AI gateway), Whisper (STT), Kokoro (TTS), Embeddings (RAG), and MCP Gateway. Most services run locally; LiteLLM optionally routes to external providers. Supports NVIDIA GPU (CUDA) acceleration.

Found May 08, 2026 at 10 stars.
AI Analysis

Primary language: Shell

AI Summary

A set of easy-to-launch bundles for running a complete private AI system at home, with options for chat, voice, search, and tools.

How It Works

1. 🌐 Discover home AI

You hear about a simple kit that lets you run powerful AI tools like chat, voice recognition, and smart search right on your own computer, all private and secure.

2. 📥 Get the setup

Download the ready-made package of files that bundle everything together nicely.
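In practice, "getting the setup" usually means cloning the repository. A minimal sketch, where the URL is inferred from the owner and repo names on this page rather than quoted from the project's docs:

```shell
# Clone the bundle; the URL is inferred from the owner/repo names on this page.
REPO="https://github.com/hwdsl2/docker-ai-stack.git"
git clone "$REPO"        # fetch the ready-made package of files
cd docker-ai-stack       # the bundled Compose files live in this directory
```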

3. 🚀 Launch your AI world

With a single command, start the whole collection of AI helpers; they connect and configure themselves automatically.
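The "single command" is presumably Docker Compose; the exact invocation below is a sketch that assumes a standard compose file sits in the checkout directory.

```shell
# A sketch of the one-command launch (run inside the checkout).
docker compose up -d     # start every bundled service in the background
docker compose ps        # each container should report "running" or "healthy"
```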

4. 🧠 Add a smart brain

Choose a clever AI model and load it in so your assistant can think, chat, and understand like magic.
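Loading a model typically means pulling one into the Ollama service. In this sketch, the container name "ollama" and the model tag are assumptions, not taken from the repo's docs:

```shell
# Pull a model into the running Ollama container (names are assumptions).
docker exec ollama ollama pull llama3.2
docker exec ollama ollama list   # the new model should appear in this list
```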

5. Quick health check

Run a simple test to confirm all your AI friends are awake, happy, and ready to help.
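The repo reportedly ships its own health-check script; a hand-rolled equivalent can be sketched like this, assuming the services listen on their usual default ports (11434 for Ollama, 4000 for LiteLLM), which this page does not confirm:

```shell
# Probe each service's HTTP endpoint; ports are common defaults, not confirmed from this repo.
endpoints="http://localhost:11434/api/tags http://localhost:4000/health"
for url in $endpoints; do
  if curl -fsS --max-time 5 "$url" > /dev/null 2>&1; then
    echo "OK   $url"
  else
    echo "FAIL $url"
  fi
done
```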

6. Pick your adventure

💬 Just chat

Enjoy private conversations like a personal ChatGPT on your machine.

🎤 Voice magic

Speak, get answers spoken back—full voice assistant experience.

📚 Smart search

Ask questions about your documents with accurate, context-aware replies.

🔧 Coding helper

Give your AI access to files and web for building and fixing code.

🎉 Your private AI shines

Now you have a fully working, local AI suite: chat, speak, and search securely without sending your data anywhere else.
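Once the stack is up, the chat use case boils down to a request against the gateway's OpenAI-compatible endpoint. In this sketch, the port, the key variable name, and the model identifier are all assumptions:

```shell
# Chat with the stack via the gateway's OpenAI-compatible API.
# Port 4000, $LITELLM_API_KEY, and the model name are assumptions, not quoted from the repo.
BODY='{"model": "ollama/llama3.2", "messages": [{"role": "user", "content": "Hello from my private stack"}]}'
curl -s http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "(start the stack first)"
```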

AI-Generated Review

What is docker-ai-stack?

Docker AI Stack deploys a full self-hosted AI setup on your Linux server via one Docker Compose command, bundling Ollama for local LLMs, LiteLLM as a unified gateway to 100+ providers, Whisper for speech-to-text, Kokoro for natural TTS, embeddings for RAG, and MCP Gateway for tools like filesystem and GitHub access. Everything runs locally for privacy, with auto-generated API keys and optional routing to external APIs. GPU acceleration kicks in seamlessly on NVIDIA hardware, and lightweight stacks start from 2.5GB RAM.
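Before relying on the CUDA path, it's worth confirming that containers on the host can actually see the GPU. This generic check (not specific to this repo) requires the NVIDIA Container Toolkit:

```shell
# Verify Docker containers can access the NVIDIA GPU.
docker run --rm --gpus all ubuntu nvidia-smi
```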

Why is it gaining traction?

Zero-config magic: services interconnect automatically, health checks via a single script, and one command gets you a production-ready local AI docker stack—no YAML tweaking needed. Modular stacks for chat, RAG, voice pipelines, or tools mean you deploy only what fits, with easy updates preserving data. Devs love the OpenAI-compatible APIs on standard ports, making it a drop-in for clients like Cline or Cursor.
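Because the services speak the OpenAI API, pointing an existing client at the local gateway is just an environment tweak. The port and key below are illustrative assumptions; the stack reportedly generates its own keys:

```shell
# Point any OpenAI-compatible client at the local gateway (values are illustrative).
export OPENAI_BASE_URL="http://localhost:4000/v1"
export OPENAI_API_KEY="sk-local-example"   # substitute the stack's auto-generated key
```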

Who should use this?

Self-hosting engineers who deploy repos to their own servers for private AI inference. Indie devs prototyping voice agents or RAG apps locally without cloud bills. Teams that need a quick Docker Compose AI stack on edge servers, especially with GPU acceleration for faster responses.

Verdict

Worth cloning for instant local AI: thorough multilingual docs and backup guides make it dev-friendly, even though its 10 stars and 1.0% credibility score signal early maturity. Test on non-production systems first; it's a smart starting point for a Docker-based generative AI stack.

