pheonix-delta

Hierarchical RAG architecture scaling to 693K chunks on consumer hardware (4GB VRAM). Features 3-address routing, hybrid vector+graph fusion, and SetFit classification.

34 stars · 69% credibility
Found Feb 09, 2026 at 26 stars
AI Summary (Python)

WiredBrain is a research prototype for a hierarchical retrieval system that processes and queries 693,313 knowledge chunks across 13 specialized domains using consumer-grade hardware.

How It Works

1
📖 Discover WiredBrain

You stumble upon WiredBrain, a clever system that packs massive expert knowledge into topics like math, robots, and physics right on your home computer.

2
🛠️ Get everything ready

You follow a few easy steps to prepare your computer with the basic tools it needs; the quick setup takes just minutes.

3
🧠 Build your knowledge brain

You launch the process that gathers, cleans, and organizes hundreds of thousands of smart facts into a powerful, searchable brain that fits on everyday hardware.

4
🔍 Connect and search

You link up the brain's smart search tools and start exploring with simple questions about tough subjects.

5
Ask expert questions

You fire off detailed queries on control systems or rocket math, and it pulls exactly the right facts super fast.

🎉 Unlock expert insights

Your personal AI brain delivers clear, accurate answers that help you master complex topics without needing fancy computers or cloud services.
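The workflow above boils down to hierarchical routing: instead of scanning all ~693K chunks, a query is resolved level by level, first to a domain, then to a section, then to a chunk. Here is a minimal sketch of that idea in pure Python; the toy corpus, summary strings, and word-overlap scoring are stand-ins, since the real system uses learned classifiers and vector embeddings at each level:

```python
# Toy 3-level index: domain -> section -> chunks.
# Summary strings stand in for learned domain/section representations.
DOMAINS = {
    "robotics": "robot control slam sensor actuator pid",
    "math": "matrix eigenvalue integral proof algebra",
}
SECTIONS = {
    ("robotics", "slam"): "slam localization mapping loop closure drift",
    ("robotics", "control"): "pid controller feedback gain stability",
    ("math", "linear_algebra"): "matrix eigenvalue eigenvector basis",
}
CHUNKS = {
    ("robotics", "slam"): [
        "SLAM estimates the robot pose and the map simultaneously.",
        "Loop closure detection corrects accumulated odometry drift.",
    ],
    ("robotics", "control"): [
        "A PID controller sums proportional, integral and derivative terms.",
    ],
    ("math", "linear_algebra"): [
        "Eigenvalues describe how a matrix scales its eigenvectors.",
    ],
}

def _score(query: str, text: str) -> int:
    """Toy relevance: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def route(query: str) -> tuple[str, str, str]:
    """Resolve a 3-part address (domain, section, chunk) for a query.

    Each level narrows the candidate set, so only the chunks of one
    section are ever scanned instead of the whole corpus.
    """
    domain = max(DOMAINS, key=lambda d: _score(query, DOMAINS[d]))
    section = max((s for (d, s) in SECTIONS if d == domain),
                  key=lambda s: _score(query, SECTIONS[(domain, s)]))
    chunk = max(CHUNKS[(domain, section)], key=lambda c: _score(query, c))
    return domain, section, chunk
```

With the toy data, `route("how does loop closure fix drift in slam")` lands in the robotics domain, the slam section, and the loop-closure chunk, without ever touching the math chunks.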


Star Growth

This repo grew from 26 to 34 stars.
AI-Generated Review

What is WiredBrain-Hierarchical-Rag?

This Python project delivers a hierarchical RAG architecture that scales retrieval across 693K knowledge chunks on consumer GPUs such as a GTX 1650 with just 4GB VRAM. It tackles the "lost in the middle" problem in local LLMs with 3-address routing that cuts the search space by 99.997%, fusing vector search with graph traversal for precise results in under 100ms. Developers get a drop-in retriever for hybrid queries, plus Dockerized Qdrant, PostgreSQL, Redis, and Neo4j backends via a simple `docker-compose up`.
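The vector+graph fusion described above needs some way to merge a ranked list of vector-search hits with a ranked list of graph-traversal hits. One common technique is reciprocal rank fusion (RRF); whether WiredBrain uses RRF specifically is an assumption, so treat this as an illustrative sketch rather than the repo's actual code:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: merge several ranked lists of chunk IDs.

    Each chunk scores sum(1 / (k + rank)) over every list it appears in,
    so items ranked well by both retrievers float to the top. k=60 is the
    conventional damping constant from the RRF literature.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["c12", "c7", "c3"]   # e.g. nearest neighbours from Qdrant
graph_hits = ["c7", "c19", "c12"]   # e.g. chunks reached via Neo4j traversal
fused = rrf_fuse([vector_hits, graph_hits])
```

Here `c7` wins the fused ranking because it appears near the top of both lists, even though neither retriever ranked it first.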

Why is it gaining traction?

Unlike flat RAG in LangChain or LlamaIndex, this hierarchical implementation handles 13 specialized domains with autonomous knowledge graphs (172K entities, 688K relations) at zero cloud cost. SetFit classification and TRM reasoning detect evidence gaps to prevent hallucinations, delivering A-grade quality (a 0.878 score) on hardware anyone owns. The repo stands out for its research papers, ablation studies, and quick-start code demonstrating 13x speedups over baselines.
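The evidence-gap idea above can be illustrated with a prototype-based classifier that abstains below a similarity threshold. SetFit proper fine-tunes a sentence transformer on few-shot examples; the cosine-to-prototype comparison and the 0.6 threshold below are simplified stand-ins, not the repo's actual model:

```python
import math

def _cos(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def classify_or_abstain(query_vec: list[float],
                        prototypes: dict[str, list[float]],
                        threshold: float = 0.6):
    """Assign a query embedding to the closest domain prototype.

    When even the best match falls below the threshold, return None:
    an evidence gap, where refusing to answer beats hallucinating.
    """
    best_domain, best_sim = None, -1.0
    for domain, proto in prototypes.items():
        sim = _cos(query_vec, proto)
        if sim > best_sim:
            best_domain, best_sim = domain, sim
    if best_sim < threshold:
        return None, best_sim  # evidence gap detected
    return best_domain, best_sim

# Toy 2-D "embeddings" for two domains.
prototypes = {"robotics": [1.0, 0.0], "math": [0.0, 1.0]}
```

A query vector close to the robotics prototype classifies cleanly, while one pointing away from every prototype triggers the abstain path.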

Who should use this?

Robotics engineers indexing ROS2 docs or SLAM papers need its hierarchical routing to avoid drowning in noise. ML researchers tackling hierarchical classification or RAG retrieval in constrained setups will appreciate the consumer-grade scaling. Defense devs building air-gapped systems want the verifiable audits and multi-domain coverage without vendor lock-in.

Verdict

Solid research prototype for local hierarchical RAG experiments: try it if you're prototyping massive-scale retrieval on laptops (26 stars reflect early buzz). With a 69% credibility score and thorough docs and papers, it's mature enough for POCs but needs more community testing before production. Fork and star to watch it evolve.
