jackwener

Agent-native persistent knowledge management — compile knowledge once, query forever.

17 stars · 0 forks · 100% credibility · Found Apr 20, 2026 at 17 stars
Language: TypeScript

AI Summary

LLM Wiki is a tool that sets up a Markdown folder structure AI agents use to compile, maintain, search, and analyze an interconnected personal knowledge base, compatible with note-taking apps like Obsidian.

How It Works

1
🔍 Discover LLM Wiki

You learn about a workflow where AI agents build and continually update a personal knowledge base for you.

2
📥 Install the tool

Install the free CLI on your machine so you can start building your knowledge base.

3
📁 Pick your folder

Choose or create a folder on your computer where all your notes and ideas will live.

4
Set everything up

A single `init` command creates all the folders, guides, and starter pages for your knowledge base.

5
🤖 Add info with AI help

Chat with your AI agent using simple slash commands to save articles, answer questions, or research topics.

6
🔍 Search and connect

Search your notes quickly and see how ideas link together as the collection grows.

Explore your knowledge hub

Open the vault in a note app such as Obsidian to browse linked pages, view the connection graph, and keep your insights organized for the long term.
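The steps above can be sketched programmatically. The folder names below (`sources/`, `wiki/`, `log/`) are an assumption inferred from the review's mention of immutable sources, agent-maintained wiki pages, and an append-only log; the real `init` command's layout may differ.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Hypothetical vault layout: immutable sources, agent-maintained
// wiki pages, and an append-only log. Not llm-wiki's actual output.
function scaffoldVault(root: string): void {
  for (const dir of ["sources", "wiki", "log"]) {
    fs.mkdirSync(path.join(root, dir), { recursive: true });
  }
  // A starter index page with an Obsidian-style wikilink.
  fs.writeFileSync(
    path.join(root, "wiki", "index.md"),
    "# Knowledge Base\n\nStart here: [[getting-started]]\n"
  );
  // Append-only activity log: only ever add lines, never rewrite.
  fs.appendFileSync(path.join(root, "log", "activity.md"), "- vault initialized\n");
}

scaffoldVault("my-vault");
```

Opening `my-vault` in Obsidian would then render `[[getting-started]]` as a clickable link to a yet-to-be-created page.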


AI-Generated Review

What is llm-wiki?

llm-wiki is a TypeScript CLI tool for agent-native persistent knowledge management: compile knowledge once from raw sources into an interconnected Markdown wiki, then query it forever. It sets up Obsidian-compatible vaults with AI agent skills for ingest, query, lint, and research operations. The tool itself makes no LLM calls; agents drive it through slash commands such as `/ingest path` or `/query "distributed consensus"`. Pair it with the optional DB9 backend for hybrid BM25 + vector search, with CJK tokenization for multilingual content.
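The "interconnected" part rests on Obsidian-style `[[wikilinks]]` between pages. A minimal sketch, not the tool's actual implementation, of extracting those links and spotting orphan pages (pages nothing links to):

```typescript
// Extract Obsidian-style [[wikilinks]] from page bodies and find
// orphans. A simplified illustration, not llm-wiki's graph code.
type Vault = Record<string, string>; // page name -> markdown body

function extractLinks(body: string): string[] {
  // Capture the target before any alias (|) or heading (#) suffix.
  return [...body.matchAll(/\[\[([^\]|#]+)/g)].map((m) => m[1].trim());
}

function findOrphans(vault: Vault): string[] {
  const linked = new Set<string>();
  for (const body of Object.values(vault)) {
    for (const target of extractLinks(body)) linked.add(target);
  }
  return Object.keys(vault).filter((page) => !linked.has(page));
}

const vault: Vault = {
  "index": "See [[raft]] and [[paxos]].",
  "raft": "Compare with [[paxos]].",
  "paxos": "A consensus protocol.",
  "scratch": "Unlinked notes.",
};

// "index" and "scratch" have no inbound links.
console.log(findOrphans(vault));
```

An agent maintaining the vault could run a check like this after each ingest and link orphans back into the index.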

Why is it gaining traction?

It ditches one-shot RAG for compounding knowledge: agents auto-maintain wiki pages with wikilinks, sources stay immutable, and the log is append-only. CLI extras like `llm-wiki graph` reveal communities, hubs, and orphans, while `search` fuses keyword and semantic results via Reciprocal Rank Fusion (RRF). The companion llm-wiki-skill bootstraps Claude or Codex instantly, with no setup beyond `init`.
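The RRF fusion mentioned above can be sketched generically: each ranked list contributes 1 / (k + rank) per document, and documents are re-sorted by the summed score. The constant k = 60 is the conventional choice from the original RRF paper; llm-wiki's actual parameters are an assumption here.

```typescript
// Reciprocal Rank Fusion: merge multiple ranked result lists by
// summing 1 / (k + rank) per document. k = 60 is the conventional
// default; llm-wiki's value may differ.
function rrfFuse(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((doc, i) => {
      // i is 0-based, so rank = i + 1.
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc);
}

const bm25 = ["raft.md", "paxos.md", "zab.md"];      // keyword hits
const vector = ["raft.md", "consensus.md", "paxos.md"]; // semantic hits

// raft.md ranks first: it tops both lists.
console.log(rrfFuse([bm25, vector]));
```

The appeal of RRF is that it needs no score normalization between BM25 and cosine similarity, only the ranks.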

Who should use this?

AI researchers building curated LLM knowledge datasets or knowledge graphs. Developers maintaining domain wikis or RAG pipelines. Teams needing persistent LLM knowledge management for project docs, with Obsidian for human edits.

Verdict

Solid early bet at 17 stars and 100% credibility: mature docs, CLI polish, and Vitest coverage, though expect tweaks as agent-native workflows evolve. Install it globally and `init` a vault if a persistent wiki beats transient chat memory for you.


