chappyasel

A self-improving LLM knowledge base about self-improving LLM knowledge bases

Found Apr 07, 2026 at 10 stars
AI Summary

An open-source compiler that turns curated sources into a self-improving markdown wiki on LLM knowledge systems, agent memory, context engineering, and self-improvement.

How It Works

1
🌟 Discover meta-kb

You find this handy project that turns notes and links into a smart, growing wiki about AI helpers.

2
🍴 Make it yours

Copy the project to your space and change one easy setting to focus on your own topic, like recipes or gadgets.

3
📚 Gather your info

Share website links or quick notes, and it smartly pulls in details without you lifting a finger.

4
✨ Build the magic wiki

Press go once, and it weaves everything into neat articles, maps, and connections that make sense.

5
📖 Dive in and explore

Wander through your new wiki full of clear explanations, pictures, and smart links between ideas.

🎉 Your expert guide is ready

Celebrate having a personal, always-improving knowledge treasure that grows with new info.
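The "one easy setting" from step 2 could be sketched as a small topic config. A minimal sketch in TypeScript; the field names (`topic`, `seedQueries`, `synthesisTitle`) are assumptions for illustration, not meta-kb's actual schema:

```typescript
// Hypothetical topic config: the one thing you'd change after forking.
// Field names are illustrative, not meta-kb's actual schema.
interface TopicConfig {
  topic: string;           // what the wiki is about
  seedQueries: string[];   // searches used to find new sources
  synthesisTitle: string;  // headline article, e.g. "The State of ..."
}

const config: TopicConfig = {
  topic: "embedded-rust",
  seedQueries: ["embedded rust HAL", "no_std crates"],
  synthesisTitle: "The State of Embedded Rust",
};
```

Per step 2, retargeting the wiki at a new domain would be a matter of swapping `topic` and `seedQueries`, while everything downstream (ingest, compile, graph) stays the same.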


AI-Generated Review

What is meta-kb?

meta-kb is a TypeScript/Bun toolkit that ingests sources from GitHub repos, arXiv papers, Twitter threads, and articles, then compiles them into a markdown wiki using LLMs. It generates synthesis articles like "The State of LLM Knowledge Bases," project reference cards, and an interactive knowledge graph; the demo instance covers self-improving LLM agents and knowledge systems. Fork it once, tweak a config file for your topic, run `bun run ingest` and `bun run compile`, and get a self-updating wiki with auto-verified claims.
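The ingest step might look something like the sketch below: classify a shared link by host, then emit a markdown stub for the compiler to fill in. This is a hypothetical illustration; `classifySource` and `toStub` are invented names, not meta-kb's actual API.

```typescript
// Hypothetical sketch of the ingest step: classify a source URL and
// produce a markdown stub for the wiki compiler to enrich later.
type SourceKind = "github" | "arxiv" | "article";

function classifySource(url: string): SourceKind {
  const host = new URL(url).hostname;
  if (host === "github.com") return "github";
  if (host === "arxiv.org") return "arxiv";
  return "article"; // fallback for blog posts, docs, threads, etc.
}

function toStub(url: string): string {
  const kind = classifySource(url);
  // Frontmatter stub; the real pipeline would fetch and summarize content.
  return `---\nsource: ${url}\nkind: ${kind}\n---\n\n<!-- details filled in by the ingest pipeline -->\n`;
}
```

Routing by source kind matters because each type warrants different handling: repos get cloned and analyzed, papers get abstracts pulled, and articles get scraped.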

Why is it gaining traction?

The self-improving loop (extracting claims, verifying them against sources, and auto-fixing errors) sets it apart from static RAG pipelines and manual wikis, echoing Karpathy's LLM-compiled-markdown idea but looped back on itself. Dual compilation (deterministic scripts or agent skills) keeps output consistent, while deep research clones each repo and analyzes 15-25 key files for architecture breakdowns. Neutral scoring treats all projects equally, which appeals to devs hunting for signal among self-improving LLM agent repos on GitHub.
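The extract-verify-fix loop described above can be sketched as follows. Here `verifyClaim` is a toy keyword check standing in for an LLM verification call, and `fix` stands in for an LLM rewrite; only the control flow mirrors the idea, and all names are hypothetical.

```typescript
// Minimal sketch of the self-improving loop: verify each extracted claim
// against its source, rewrite failures, and re-check the rewrite.
interface Claim {
  text: string;          // statement extracted for the wiki
  sourceExcerpt: string; // source passage it was extracted from
}

function verifyClaim(c: Claim): boolean {
  // Stand-in check: a claim passes if its key terms appear in the source.
  return c.text
    .toLowerCase()
    .split(/\W+/)
    .filter((w) => w.length > 4)
    .every((w) => c.sourceExcerpt.toLowerCase().includes(w));
}

function selfImprove(claims: Claim[], fix: (c: Claim) => Claim): Claim[] {
  // One repair pass: keep verified claims, rewrite failures and re-verify.
  return claims.map((c) => {
    if (verifyClaim(c)) return c;
    const fixed = fix(c);
    return verifyClaim(fixed) ? fixed : c; // keep original if fix also fails
  });
}
```

The key design point is that the fixer's output goes back through the same verifier, so a bad rewrite can't silently replace a claim; that re-check is what makes the loop self-improving rather than merely self-editing.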

Who should use this?

Agent builders mapping ecosystems like self-improving LLMs or context engineering. Researchers surveying ML papers or maintaining security trackers. Devs forking it for startup playbooks, language ecosystems, or open-source directories: anyone curating 100+ sources into scannable landscapes without endless note-taking.

Verdict

Promising forkable template for LLM-powered wikis, but 1.0% credibility and 10 stars signal early days: docs are solid, but expect iteration on incremental recompiles and community contributions. Try it for your niche if manual curation drains you; skip it for production until GitHub Actions automation lands.

