anzal1

⚡ LLM-powered knowledge compiler — turn documents into a confidence-scored, temporal knowledge base with interactive dashboard

16 stars · 2 forks · 100% credibility · TypeScript
Found Apr 10, 2026 at 16 stars

AI Summary

Quicky Wiki uses AI to extract verifiable claims from documents or URLs, assigns confidence scores to them, tracks changes over time, and provides an interactive dashboard for querying and visualization.
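The summary suggests a claim record along these lines. A minimal TypeScript sketch, where every field name is an assumption rather than the repo's actual schema:

```typescript
// Hypothetical shape of an extracted claim; field names are illustrative
// and not taken from the quicky-wiki source.
interface Claim {
  id: string;
  text: string;                                  // the verifiable statement
  source: string;                                // file path or URL it came from
  confidence: number;                            // 0..1 score at extraction time
  history: { at: string; confidence: number }[]; // temporal tracking of the score
}

const claim: Claim = {
  id: "c-001",
  text: "Water boils at 100 °C at sea level.",
  source: "notes/chemistry.md",
  confidence: 0.8,
  history: [{ at: "2026-04-01", confidence: 0.8 }],
};
```

Each re-ingestion that reinforces or challenges a claim would then append to `history`, which is what makes the knowledge base "temporal".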

How It Works

1. 🔍 Discover Quicky Wiki

You stumble upon a handy tool that promises to turn your scattered notes and articles into a smart, self-updating personal encyclopedia.

2. 🚀 Set up your collection

With a quick start, you name your new knowledge hub and connect an LLM provider to make it smart.

3. 📄 Feed in your stuff

You add research papers, web articles, or personal notes, and it reads them to pull out key ideas.

4. 🧠 Watch facts come alive

It identifies each important statement, rates how reliable it is, and flags any clashes between statements.

5. 📊 Explore your dashboard

You open a colorful map to zoom into connections, browse facts, or ask your collection questions in chat.

Own your wisdom

Your facts now have trust scores, evolve over time, and help you discover gaps or new insights effortlessly.
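The clash flagging in step 4 could be sketched as below. In Quicky Wiki the contradiction judgment comes from an LLM; here a toy string check stands in for it, so everything in this sketch is illustrative:

```typescript
// Toy sketch of conflict flagging between extracted claims.
type Claim = { id: string; text: string; confidence: number };

// Stub judge: a real system would ask the LLM "do these statements
// contradict each other?" — this only catches simple negations.
function contradicts(a: Claim, b: Claim): boolean {
  return (
    a.text.includes("not") !== b.text.includes("not") &&
    a.text.replace(" not", "") === b.text.replace(" not", "")
  );
}

// Compare every pair of claims and collect the conflicting ids.
function flagConflicts(claims: Claim[]): [string, string][] {
  const conflicts: [string, string][] = [];
  for (let i = 0; i < claims.length; i++)
    for (let j = i + 1; j < claims.length; j++)
      if (contradicts(claims[i], claims[j]))
        conflicts.push([claims[i].id, claims[j].id]);
  return conflicts;
}

const found = flagConflicts([
  { id: "a", text: "The API is stable", confidence: 0.9 },
  { id: "b", text: "The API is not stable", confidence: 0.6 },
]);
// found → [["a", "b"]]
```

A real judge would send both statements to the configured provider and parse a yes/no verdict, rather than compare strings.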

AI-Generated Review

What is quicky-wiki?

Quicky Wiki is a TypeScript CLI tool that compiles documents or URLs into an LLM-powered knowledge base, extracting verifiable claims with confidence scores and building a temporal knowledge graph. Drop PDFs, markdown, or links into a raw folder, run `qw ingest` to process them via providers like OpenAI or Gemini (auto-detected from env vars), and get a living wiki that tracks reinforcements, challenges, and decay over time. Fire up `qw serve` for an interactive dashboard with graph viz, chat queries, timelines, and health reports.
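The provider auto-detection mentioned above might look roughly like this. The env var names and the OpenAI-before-Gemini precedence are assumptions for illustration, not the repo's actual logic:

```typescript
// Sketch of picking an LLM provider from environment variables.
type Env = Record<string, string | undefined>;

function detectProvider(env: Env): "openai" | "gemini" | null {
  // Assumed precedence: OpenAI wins if both keys are set.
  if (env.OPENAI_API_KEY) return "openai";
  if (env.GEMINI_API_KEY) return "gemini";
  return null; // no provider configured
}

console.log(detectProvider({ OPENAI_API_KEY: "sk-test" })); // prints "openai"
```

Zero-config detection like this is why `qw ingest` can run without flags once a key is exported.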

Why is it gaining traction?

Unlike static wikis or basic RAG setups, it surfaces contradictions, applies confidence decay, and runs "metabolism" routines such as red-teaming high-confidence claims or resurfacing stale ones, so the knowledge graph evolves realistically. Multi-format exports to Obsidian markdown, Anki flashcards, Marp slides, or D3 graphs add instant value, while MCP server integration hooks it into autonomous agents. Zero-config init and batch ingest from directories keep it dead simple.
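The confidence decay mentioned here can be sketched as exponential decay with a half-life. Both the formula and the 90-day default are assumptions for illustration, not the repo's actual metabolism rule:

```typescript
// Sketch: a claim's confidence halves every `halfLifeDays` without
// reinforcement (assumed model, not quicky-wiki's documented behavior).
function decayedConfidence(
  confidence: number,
  lastReinforced: Date,
  now: Date,
  halfLifeDays = 90, // assumed half-life
): number {
  const elapsedDays =
    (now.getTime() - lastReinforced.getTime()) / 86_400_000;
  return confidence * Math.pow(0.5, elapsedDays / halfLifeDays);
}

// A 0.8-confidence claim untouched for one full half-life drops to 0.4.
const c = decayedConfidence(
  0.8,
  new Date("2026-01-01"),
  new Date("2026-04-01"), // 90 days later
);
```

Red-teaming and resurfacing would then just be queries over this score: challenge claims that stay high, resurface ones that have decayed below a threshold.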

Who should use this?

Researchers building knowledge graphs from domain reports (cyber threat intelligence, drug reviews) will love turning them into queryable, confidence-scored bases with causal relations. Analysts doing enterprise intelligence can ingest docs for graph reasoning and knowledge discovery. Solo devs prototyping LLM-backed knowledge bases get a dashboard for quick iteration without boilerplate.

Verdict

Try it for personal research wikis or LLM prototypes: the CLI and dashboard punch above 16 stars, but the 1.0% credibility score signals early maturity with thin tests and docs. Great base for LLM-powered graph projects if you tolerate rough edges.
