jasonmayes

Client side vector search using EmbeddingGemma with Web AI (LiteRT.js, TensorFlow.js, and Transformers.js)

AI Summary

VectorSearch.js is a JavaScript library that provides fast, private semantic search over large text collections directly in the web browser, complete with visualizations of the underlying AI processes.

How It Works

1. 🔍 Discover the tool — You hear about a browser library that searches your own notes and texts by meaning rather than keywords, keeping everything private on your device.

2. 🎮 Try the live demo — Jump into the ready-to-run Codepen example and watch it instantly find semantically matching sentences in the sample data.

3. 📥 Gather the pieces — Download the EmbeddingGemma model file and the supporting libraries needed to power the search on your own webpage.

4. Add to your page — Drop a simple script line into your webpage, and the search runs right in the browser.

5. 💾 Save your writings — Paste in your paragraphs or documents, name your collection, and the library stores the embeddings for quick recall.

6. 🔎 Ask a question — Type what you're looking for, adjust the similarity threshold to control how strict matching should be, and run the search over your collection.

7. 🎉 See smart matches — Instantly get back the closest-matching texts, with colorful visualizations of the tokens and embeddings behind each result, all fast and private.
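The store-then-query flow in the steps above can be sketched in plain JavaScript. This is only a conceptual sketch: the function names, the toy character-count `embed()`, and the in-memory `Map` are illustrative assumptions, not VectorSearch.js's actual API (the real library embeds with EmbeddingGemma and persists to IndexedDB).

```javascript
// Toy "embedding": bag-of-characters vector (stands in for a real model).
function embed(text) {
  const vec = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) vec[i] += 1;
  }
  return vec;
}

// Cosine similarity between two vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Named collections, standing in for the named IndexedDB databases.
const collections = new Map();

function store(name, texts) {
  collections.set(name, texts.map((t) => ({ text: t, vec: embed(t) })));
}

// Score every stored text, keep only matches above the threshold.
function query(name, q, threshold = 0.5) {
  const qv = embed(q);
  return (collections.get(name) || [])
    .map((e) => ({ text: e.text, score: cosine(qv, e.vec) }))
    .filter((r) => r.score >= threshold)
    .sort((a, b) => b.score - a.score);
}

store("notes", ["cats purr softly", "dogs bark loudly", "rain falls"]);
console.log(query("notes", "purring cat")[0].text); // → "cats purr softly"
```

The query matches on shared characters here; a real embedding model would match on shared meaning, which is the whole point of the library.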

AI-Generated Review

What is VectorSearch.js?

VectorSearch.js is a JavaScript library for client-side semantic vector search directly in the browser. It embeds texts with Google's EmbeddingGemma model and queries millions of vectors in milliseconds via cosine similarity. Embeddings are stored persistently in IndexedDB as named databases, keeping everything private and offline: no servers, and zero cost beyond your own hardware. Powered by WebGPU acceleration through LiteRT.js, TensorFlow.js, and Transformers.js, it delivers low-latency results on Linux, Windows, and macOS.
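The cosine-similarity scoring described above is simple to state: the dot product of two vectors divided by the product of their lengths. A minimal sketch (the vector values are made up for illustration):

```javascript
// Cosine similarity: 1 means same direction, 0 means orthogonal.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1 (identical direction)
console.log(cosineSimilarity([1, 0], [0, 1])); // 0 (orthogonal)
```

Embedding models typically emit unit-length vectors, in which case the denominator is 1 and each score reduces to a single dot product, which helps explain how a brute-force GPU scan over millions of vectors can stay in the millisecond range.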

Why is it gaining traction?

It stands out for fully client-side execution, dodging server latency and privacy leaks common in cloud vector DBs, while hitting GPU speeds on Intel, NVIDIA, AMD, or Apple silicon. Developers dig the ready-to-run Codepen demo, simple API for storing/querying texts with thresholds, and bonus visualizations of tokens and embeddings. The hook? Instant prototyping of local RAG without backend hassle, plus cross-browser WebGPU support.
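A sketch of what "storing/querying texts with thresholds" amounts to under the hood: a brute-force scan that scores every stored vector, drops anything below the threshold, and returns the top hits. The documents and 3-d vectors below are made up for illustration, not output of a real model.

```javascript
// Dot product of two equal-length vectors.
function dot(a, b) {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

// Assume unit-normalized embeddings, so cosine similarity == dot product.
const index = [
  { text: "doc A", vec: [1, 0, 0] },
  { text: "doc B", vec: [0.6, 0.8, 0] },
  { text: "doc C", vec: [0, 0, 1] },
];

// Score all entries, filter by threshold, return the top k.
function search(queryVec, threshold, k) {
  return index
    .map((e) => ({ text: e.text, score: dot(queryVec, e.vec) }))
    .filter((r) => r.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

console.log(search([1, 0, 0], 0.5, 2));
// [ { text: 'doc A', score: 1 }, { text: 'doc B', score: 0.6 } ]
```

Raising the threshold makes the search "pickier" (fewer, closer matches); lowering it casts a wider net.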

Who should use this?

Frontend devs building offline semantic search into web apps, such as personal knowledge bases or client-side chat tools. It's ideal for client-side-rendering-heavy sites that need quick text matching, or for prototyping RAG in PWAs on macOS, Windows, or Linux. It also suits privacy-sensitive apps that keep data on-device, such as those doing client-side encryption or local exception logging, and want embedded search over that data.

Verdict

With just 10 stars and a 1.0% credibility score, it's raw and experimental. Docs are solid via the README and demo, but it lacks tests and broad validation. Grab it for client-side EmbeddingGemma proofs of concept if WebGPU fits your stack; otherwise, wait for maturity.

