roli-lpci

Dual-layer memory for AI agents. Compressed index + vector store. 91% recall, 70ms, fully local.

Found Mar 13, 2026 at 21 stars.
AI Summary (Python)

zer0dex is a local, token-efficient memory system for AI agents that pairs a compact human-readable index with a vector store for superior recall and cross-referencing.

How It Works

1. 🔍 Discover zer0dex

You hear about a smart way to give your AI companion a lasting memory that remembers details across many chats.

2. 📥 Get it ready

You download the tool and prepare your computer with the simple helpers it needs.

3. 📝 Make your memory outline

You create a short, easy-to-read note summarizing the main topics and facts your AI knows, like a table of contents.
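As a rough illustration of such an outline (the topic names and rendering below are invented for this sketch, not zer0dex's actual index format), a compact, human-readable index can be generated from topic/fact pairs like this:

```python
# Hypothetical sketch of a compact memory index ("table of contents").
# zer0dex's real format may differ; this only illustrates the idea of a
# short, human-readable summary the agent can scan cheaply.

topics = {
    "projects": ["building the demo bot", "weekly eval runs"],
    "people": ["Sam: prefers concise answers"],
}

def build_index(topics: dict[str, list[str]]) -> str:
    """Render topics and facts as a few token-efficient lines."""
    lines = []
    for topic, facts in sorted(topics.items()):
        lines.append(f"## {topic}")
        lines.extend(f"- {fact}" for fact in facts)
    return "\n".join(lines)

print(build_index(topics))
```

The point of keeping this layer tiny is that it can be injected into every prompt for well under the ~900-token overhead the review cites.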

4. 💾 Fill the memory bank

You add your notes, chat logs, and important details so the AI can find them quickly when needed.
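To make the "memory bank" layer concrete, here is a toy retrieval sketch in plain Python. It substitutes bag-of-words counts for real embeddings, so it only illustrates the shape of vector recall, not zer0dex's implementation:

```python
import math
import re
from collections import Counter

# Toy stand-in for a vector store: zer0dex uses real embeddings, but the
# embed-then-rank-by-cosine shape is the same.

notes = [
    "Sam prefers concise answers and dislikes bullet lists.",
    "The demo bot project uses Ollama with a local llama3 model.",
    "Weekly eval runs happen every Friday afternoon.",
]

def embed(text: str) -> Counter:
    # Stand-in "embedding": lowercase word counts instead of a neural vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

store = [(note, embed(note)) for note in notes]  # the "memory bank"

def recall(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [note for note, _ in ranked[:k]]

print(recall("when are the eval runs?"))
# → ['Weekly eval runs happen every Friday afternoon.']
```

Swapping `embed` for a real embedding model is what turns this from a keyword matcher into semantic recall.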

5. 🚀 Turn on the memory helper

You start a background service that keeps your AI's memories warm and ready to use in seconds.

6. 🔗 Link it to your AI

You add a simple connection so every time someone chats with your AI, it grabs the right memories first.
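The background-service-plus-hook pattern of steps 5 and 6 can be sketched with the standard library. The endpoint name, matching logic, and response shape here are all hypothetical; zer0dex's real server and API will differ:

```python
import json
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical sketch: a tiny local HTTP service that returns relevant
# memories, plus a pre-message hook that prepends them to the prompt.

MEMORIES = [
    "User's name is Sam.",
    "Current project: roli-lpci demo bot.",
]

class MemoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = urllib.parse.parse_qs(
            urllib.parse.urlparse(self.path).query
        ).get("q", [""])[0].lower()
        # Naive relevance: keep memories sharing any word with the query.
        hits = [m for m in MEMORIES
                if set(query.split()) & set(m.lower().split())]
        body = json.dumps({"memories": hits}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def pre_message_hook(user_message: str, port: int) -> str:
    """Fetch relevant memories and prepend them to the prompt."""
    q = urllib.parse.quote(user_message)
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/query?q={q}") as r:
        memories = json.load(r)["memories"]
    context = "\n".join(f"[memory] {m}" for m in memories)
    return f"{context}\n{user_message}" if context else user_message

server = HTTPServer(("127.0.0.1", 0), MemoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

prompt = pre_message_hook("what is my current project?",
                          server.server_address[1])
print(prompt)
server.shutdown()
```

The hook runs before every message, so the agent sees relevant memories as ordinary prompt context with no changes to the model itself.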

🎉 Reliable recall, every session

Now your AI remembers conversations, projects, and connections effortlessly, staying smart and personal across sessions.


Star growth: 21 → 22 stars.
AI-Generated Review

What is zer0dex?

zer0dex delivers dual-layer memory for Python-based AI agents, pairing a compressed index file with a local vector store to enable persistent recall across sessions. It tackles weak retrieval in flat files and poor cross-referencing in plain RAG by auto-injecting relevant facts via a lightweight HTTP server—91% recall at 70ms latency, fully local with zero cloud costs. Developers get a simple CLI for seeding from Markdown, querying memories, and running an API for pre-message hooks.

Why is it gaining traction?

It outperforms baselines in head-to-head evals: 91% recall and 80% cross-reference accuracy, versus 52% for flat files and 80% for full RAG, all while staying under 900 tokens of overhead. The draw is effortless integration: spin up a server and wire it into any agent pipeline for automatic context injection on every message, with none of MemGPT's LLM-paging complexity. A built-in evaluation suite also lets you benchmark your own memories instantly.

Who should use this?

AI agent builders running local Ollama setups who need reliable long-term memory without vendor lock-in, like indie devs prototyping persistent chatbots or researchers tracking experiment logs. Ideal for backend teams integrating dual-layer memory into frameworks like OpenClaw, where cross-domain queries matter more than raw speed.

Verdict

Worth a spin for local agent memory if you're on Python and Ollama: strong evals and a dead-simple API make it a smart baseline upgrade. But with 16 stars and a 1.0% credibility score, it's alpha-stage; run the eval suite on your data before committing.
