AstraBert

Fully-local search engine with LiteParse, transformers.js and Qdrant Edge

100% credibility
Found Mar 30, 2026 at 18 stars
Language: TypeScript

AI Summary

litesearch is a local tool that processes documents into semantically searchable chunks and lets you query them without any external services.

How It Works

1. 📚 Discover litesearch

You hear about a simple tool that lets you search inside your own documents using smart matching, all privately on your computer without needing the internet.

2. 🔧 Set it up

Follow a couple of easy steps to get the tool ready to use right from your computer's command line.

3. 📥 Add your files

Pick a document like a PDF or report, and watch it get broken into helpful pieces and stored safely in a private spot on your machine.

4. Choose search style

Quick question

Type what you're looking for and instantly see the best matching parts.

🖥️ Friendly menu

Open an interactive screen to select files, ask questions, and tweak settings step by step.

5. See matches appear

Relevant snippets from your documents pop up, ranked by how closely they match your question, complete with handy confidence scores.

🎉 Private search ready

You now have your own speedy, secure way to dig up exactly what you need from your files anytime.
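The flow above can be sketched end to end in TypeScript. This is a toy pipeline, not litesearch's actual code: it chunks a document, embeds each chunk with a stand-in term-frequency vectorizer (litesearch uses real transformers.js embeddings), and ranks chunks by cosine similarity, mirroring steps 3 through 5.

```typescript
// Toy local semantic-search pipeline: chunk -> embed -> rank.
// The term-frequency "embedding" stands in for real transformer vectors.

function chunk(text: string, size = 80, overlap = 20): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}

// Stand-in embedder: term-frequency vector over a shared vocabulary.
function embed(text: string, vocab: string[]): number[] {
  const words = text.toLowerCase().match(/[a-z]+/g) ?? [];
  return vocab.map(v => words.filter(w => w === v).length);
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const na = Math.sqrt(a.reduce((s, x) => s + x * x, 0));
  const nb = Math.sqrt(b.reduce((s, x) => s + x * x, 0));
  return na && nb ? dot / (na * nb) : 0;
}

// Rank chunks of a document against a query, best match first.
function retrieve(doc: string, query: string, topK = 3) {
  const vocab = [...new Set((doc + " " + query).toLowerCase().match(/[a-z]+/g) ?? [])];
  const qVec = embed(query, vocab);
  return chunk(doc)
    .map(text => ({ text, score: cosine(embed(text, vocab), qVec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```

A real setup would swap `embed` for a transformers.js feature-extraction model and persist the vectors in Qdrant Edge instead of recomputing them per query.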


AI-Generated Review

What is litesearch?

litesearch is a fully-local semantic search engine built in TypeScript that lets you ingest documents like PDFs or Office files, then query them for relevant chunks without any cloud services. It parses files with LiteParse, generates embeddings via transformers.js, and stores everything in a local Qdrant Edge instance. CLI commands like `ingest` and `retrieve` handle the heavy lifting, with a TUI for interactive use. Developers get offline, private vector search right from their terminal, with data persisted in a simple local directory.

Why is it gaining traction?

It stands out by running entirely on your machine, with Bun and a Rust-backed Qdrant Edge providing fast, mmap-persisted storage: no API keys, no network latency, and no vendor lock-in, unlike hosted tools such as Pinecone or Weaviate. The hook is dead-simple setup for semantic search on personal docs, with options like score thresholds, file filtering, and custom chunk sizes that deliver cosine-similarity results instantly. Early adopters like the privacy edge for prototyping RAG apps locally.
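The score-threshold and file-filtering options mentioned above boil down to a simple filter over scored matches. A minimal TypeScript sketch; the option names (`minScore`, `files`, `topK`) are assumptions for illustration, not litesearch's actual flags:

```typescript
interface Match { file: string; text: string; score: number }

// Keep matches at or above a cosine-similarity threshold, optionally
// restricted to certain files, sorted best-first and capped at topK.
function filterMatches(
  matches: Match[],
  opts: { minScore?: number; files?: string[]; topK?: number } = {}
): Match[] {
  const { minScore = 0, files, topK = Infinity } = opts;
  return matches
    .filter(m => m.score >= minScore)
    .filter(m => !files || files.includes(m.file))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```

In a real deployment these checks would typically map onto retrieval options applied to the scored points a vector store like Qdrant returns.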

Who should use this?

Solo developers building offline AI tools, security-focused analysts searching sensitive reports, or indie hackers prototyping document QA without infra costs. Ideal for teams handling proprietary PDFs where cloud embeddings are a non-starter, or anyone testing litesearch pipelines before scaling.

Verdict

Try it for quick local semantic search experiments: solid docs and an easy Bun setup make the 18 stars and 1.0% credibility score forgivable for an early project, but expect tweaks for production scale. Great starter for edge-based, fully-local setups.


