tashisleepy

The bridge between human-readable wikis and machine-speed memory. Built on Karpathy's LLM Wiki pattern + Memvid.

Found Apr 06, 2026 at 17 stars.
AI Summary (Python)

Knowledge Engine bridges a human-readable wiki with a fast machine-searchable memory to organize and query documents efficiently.

How It Works

1. 🔍 Discover Knowledge Engine

You find a helpful tool that turns your scattered notes and documents into an easy-to-browse wiki and lightning-fast search memory.

2. 📥 Set up your base

Download it to your computer and create a new folder for your personal knowledge collection with a quick start command.

3. 📄 Add your files

Pick a document like meeting notes or a proposal, choose a project name, and add it to your collection.

4. ✨ See the magic happen

Your file instantly becomes a neat wiki page for reading and gets stored in fast memory for quick lookups.

5. 🔎 Search for answers

Type a question about your info, like 'what's the budget plan', and get spot-on results with links back to sources.
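A toy version of this kind of ranked, grep-style lookup over wiki pages might look like the following (names and scoring are illustrative, not the repo's implementation):

```python
def search(query: str, pages: dict) -> list:
    """Rank wiki pages by how many query-term occurrences they contain,
    returning (page_name, score) pairs so every hit links back to its
    source page. Pages matching no terms are dropped."""
    terms = query.lower().split()
    scored = [
        (name, sum(text.lower().count(t) for t in terms))
        for name, text in pages.items()
    ]
    return sorted(
        [(n, s) for n, s in scored if s > 0],
        key=lambda pair: -pair[1],
    )
```

So a query like `search("budget plan", pages)` surfaces the budget page first, and the page name doubles as the link back to the source.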

6. 📊 View your dashboard

Open a simple web page on your computer to browse pages, check stats, and ensure everything stays in sync.
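One plausible way such a sync check could work under the hood is by comparing content hashes across the two layers. This is a guess at the mechanism, not the repo's code:

```python
import hashlib

def drift_report(wiki: dict, memory: dict) -> dict:
    """Compare content hashes of the wiki layer and the memory layer,
    reporting pages missing from either side and pages whose content
    has diverged ('stale')."""
    digest = lambda s: hashlib.sha256(s.encode("utf-8")).hexdigest()
    w = {k: digest(v) for k, v in wiki.items()}
    m = {k: digest(v) for k, v in memory.items()}
    return {
        "missing_in_memory": sorted(w.keys() - m.keys()),
        "missing_in_wiki": sorted(m.keys() - w.keys()),
        "stale": sorted(k for k in w.keys() & m.keys() if w[k] != m[k]),
    }
```

An empty report means the human-readable and machine-searchable layers agree.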

πŸŽ‰ Smart knowledge ready

Your documents now live as an organized, growing wiki that's instantly searchable, saving you hours of digging.


AI-Generated Review

What is knowledge-engine?

Knowledge-engine is a Python tool that ingests documents like proposals, notes, and PDFs into a dual-layer system: an Obsidian-compatible markdown wiki for humans to browse and a Memvid-powered memory layer for sub-5ms semantic searches. Built on Karpathy's LLM Wiki pattern, it solves knowledge decay by creating structured, compounding wikis from raw sources while keeping both layers synced via atomic operations. Users get CLI commands like `ingest`, `search`, `sync`, and `drift` checks, plus a local web UI for dashboard stats, entity views, and health reports.
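The "atomic operations" that keep both layers synced presumably amount to something like write-to-temp-then-rename. A generic sketch of that pattern (not the repo's actual code):

```python
import os
import tempfile

def atomic_write(path: str, text: str) -> None:
    """Write text to a temporary file in the target's directory, then
    atomically swap it into place, so a reader (or a crash mid-write)
    never sees a half-written wiki page."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(text)
        os.replace(tmp, path)  # atomic rename on POSIX and Windows
    except BaseException:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise
```

Creating the temp file in the same directory matters: `os.replace` is only atomic when source and target sit on the same filesystem.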

Why is it gaining traction?

It stands out by ditching vector DBs for plain markdown grep (fast up to 500 pages), with optional Memvid for scale, avoiding the infrastructure overhead typical of AI knowledge engineering. The bridge detects drift, extracts entities and tags automatically, and supports dual-layer search that prioritizes curated wiki results. Developers are drawn in by the 60-second quickstart and cross-platform file locking for reliable multi-user use.
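"Prioritizes curated wiki results" could be as simple as an ordered, deduplicating merge of the two layers' hit lists. A hypothetical sketch:

```python
def dual_layer_search(wiki_hits: list, memory_hits: list) -> list:
    """Merge results from both layers: curated wiki hits come first
    (they are human-reviewed), then memory hits the wiki did not
    already cover, preserving each layer's internal ranking."""
    seen = set(wiki_hits)
    return list(wiki_hits) + [h for h in memory_hits if h not in seen]
```

Whatever the real ranking logic is, the visible behavior described above is that a curated page always outranks a raw memory match for the same content.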

Who should use this?

Consultants managing client briefs and meeting notes, knowledge engineers building AI agent memory, or solo researchers turning docs into queryable wikis. Ideal for teams needing Obsidian browsing plus agent-speed retrieval without servers, like in retail AI projects or cross-client knowledge bases.

Verdict

Try it for lightweight knowledge engineering if you're under 1K docs; solid docs and demo data make onboarding easy, though 17 stars and a 1.0% credibility score signal an early-stage project. Pair it with LLMs for full compounding; skip it if you need production-scale vector search.


