r0b0tlab

Filesystem-first LLM-Wiki + Obsidian + Hermes Agent memory system. Markdown source of truth, SQLite FTS5 search, secret scanning, tier-based memory. Built for local LLM setups.

11 stars
100% credibility
Found May 14, 2026 at 11 stars
AI Analysis
Python
AI Summary

A local, editable note-taking system designed to store and retrieve memories for AI agents using plain text files with built-in search and safety checks.

How It Works

1. 🔍 Discover the memory helper

You hear about this simple tool that helps AI assistants remember conversations and notes like a personal wiki.

2. 📥 Set it up on your computer

You download the files and follow the setup instructions; no special skills are needed.

3. 📁 Create your knowledge folder

You create a folder (an Obsidian vault) on your computer to hold all your notes and memories.

4. ✏️ Fill it with your thoughts

You add everyday notes, project ideas, chat summaries, and important facts into plain text files you can edit anytime.

5. 🔧 Prepare the smart finder

You run a quick setup step that builds a search index (SQLite FTS5) so notes in the folder can be found fast.

6. 🧠 Ask and get strong matches

You type a question and instantly see the notes and memories that best match it, ready to use.

🎉 AI remembers everything

Your AI helper now pulls from your full knowledge base to give smarter, more helpful answers.
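The index-then-search flow above can be sketched with Python's built-in `sqlite3` module and its FTS5 extension. This is a hypothetical minimal version for illustration, not the repo's actual code; the function names and schema are assumptions.

```python
import sqlite3

def build_index(notes):
    """Build an in-memory FTS5 index over (path, body) note pairs."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE VIRTUAL TABLE notes USING fts5(path, body)")
    db.executemany("INSERT INTO notes VALUES (?, ?)", notes)
    return db

def search(db, query, limit=5):
    """Return the best-matching note paths, ranked by FTS5's built-in rank."""
    rows = db.execute(
        "SELECT path FROM notes WHERE notes MATCH ? ORDER BY rank LIMIT ?",
        (query, limit),
    )
    return [path for (path,) in rows]

# Example usage with two toy notes:
db = build_index([
    ("projects/brain.md", "tier-based memory for local agents"),
    ("daily/summary.md", "chat summary about building indexes"),
])
print(search(db, "memory"))
```

Because the source of truth stays in plain Markdown files, the index can be dropped and rebuilt deterministically at any time.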


AI-Generated Review

What is llm-wiki_obsidian_hermes_r0b0tlabbra1n?

r0b0tlabbra1n is a filesystem-first LLM-wiki memory system built in Python for local Hermes agents and other CLI tools. It treats Markdown in an Obsidian vault as the source of truth, adding SQLite FTS5 search, tier-based retrieval, and secret scanning so knowledge compounds across sessions without relying on chat history or fragile RAG. The CLI offers commands such as `brain init --vault ~/my-brain`, `brain search "query" --context`, and `brain ingest-sessions` to pull in Hermes chats securely.
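The secret scanning mentioned above can be sketched as a small pattern pass over note text before it is ingested. The patterns and function below are hypothetical illustrations, not the repo's actual rules.

```python
import re

# Hypothetical detection rules -- the repo's real rule set isn't shown here.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "generic_token": re.compile(r"(?i)\b(api[_-]?key|token)\s*[:=]\s*\S{16,}"),
}

def scan_text(text):
    """Return (rule_name, matched_text) pairs for anything secret-shaped."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Running a scan like this on every ingested session keeps leaked credentials out of the permanent, searchable vault.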

Why is it gaining traction?

It stands out by making agent memory fully auditable and human-editable: you can lint vaults for broken links and secrets, rebuild indexes deterministically, and run gold-query evals to measure retrieval quality. Hermes integration shines with skill packs, cron heartbeats for lint/index/drift checks, and an MCP JSON facade for easy agent calls. Developers are drawn to the local, deterministic search, which weights results by tier, recency, and confidence to build smarter context packets.
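A scoring rule that combines text relevance with tier, recency, and confidence might look like the sketch below. The weights, decay constant, and note fields are assumptions for illustration; the repo's actual formula isn't documented here.

```python
from datetime import datetime, timezone

# Hypothetical tier weights -- not the repo's real values.
TIER_WEIGHT = {"core": 1.0, "working": 0.7, "archive": 0.4}

def score(note, text_rank, now=None):
    """Combine a full-text relevance rank with tier, recency, and confidence.

    `note` is a dict with "tier", "modified" (aware datetime), and
    "confidence" (0..1) fields; `text_rank` is the search engine's score.
    """
    now = now or datetime.now(timezone.utc)
    age_days = (now - note["modified"]).days
    recency = 1.0 / (1.0 + age_days / 30.0)  # decays over roughly a month
    return text_rank * TIER_WEIGHT[note["tier"]] * recency * note["confidence"]
```

Under a rule like this, a fresh core-tier fact outranks a stale archived one even when both match the query equally well.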

Who should use this?

Hermes users running local LLMs who need persistent, searchable memory for projects, procedures, and session summaries. Obsidian fans wanting to ingest agent outputs into wikilink graphs without cloud dependencies. Solo AI builders auditing knowledge drift and promoting facts across memory tiers.

Verdict

At 11 stars and 100% credibility, this v0.1 Python project punches above its weight with solid docs, pytest coverage, and built-in quality gates. It doesn't yet have vector embeddings and needs more real-world stress testing. Try it if local agent memory fits your setup; the quickstart works and the CLI delivers.
