
MemVector: Local Vector API, Storage & Embedding Engine for PHP

Found Mar 04, 2026 at 10 stars.
AI Summary

A PHP extension for high-speed local vector storage, similarity search, text embeddings, and reranking to enable semantic search and AI pipelines directly in PHP processes.

How It Works

1. 💡 Dream of smart search

You want your app to instantly find similar ideas in your content, like matching stories about cats without exact words.

2. 📦 Add the magic toolbox

You easily bring this speedy toolbox into your app setup so it can handle smart matching right inside.

3. Choose your starting point

- 📊 Use ready numbers: bring in number codes (embeddings) you already have from other tools.
- 🧠 Grab a tiny brain: download a small helper model that turns everyday words into those magic numbers automatically.

4. 🗄️ Fill your treasure chest

You pack your texts into a super-fast storage chest that keeps everything organized and ready to match.

5. 🔍 Ask and discover

You type a question, and it pulls up the closest matches in a flash, feeling like magic.

6. Polish the gems

Use a clever judge to double-check and pick the absolute best matches from your finds.

🎉 Blazing smart app!

Your app now zips through meanings privately and super-quickly, with no waiting or outside help – pure joy!
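The six steps above can be sketched end to end. The snippet below is a language-agnostic toy in Python, not memvector's actual API: a bag-of-words counter stands in for the embedding model, a plain list stands in for the storage chest, and the "clever judge" is reduced to keeping the top match.

```python
import math
from collections import Counter

VOCAB = ["cat", "kitten", "dog", "car", "engine", "pet"]

def embed(text):
    """Toy bag-of-words embedding; a real engine would use a small model."""
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

# Step 4: pack texts into a store (here just a list of (text, vector) pairs).
docs = ["cat kitten pet", "dog pet", "car engine"]
store = [(d, embed(d)) for d in docs]

# Step 5: a query pulls up the closest matches.
query = embed("kitten cat")
ranked = sorted(store, key=lambda item: cosine(query, item[1]), reverse=True)

# Step 6: a reranker would rescore the top hits; here we keep the best one.
best_text, _ = ranked[0]
print(best_text)  # -> cat kitten pet
```

Note how "matching stories about cats without exact words" falls out of the vector comparison: the query never has to share a full phrase with the stored text, only nearby dimensions.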

AI-Generated Review

What is ext-memvector?

ext-memvector is a C++ PHP extension that delivers a local vector API, storage engine, and embedding system directly in your PHP process. PHP developers can embed text, store vectors with metadata, run similarity searches via HNSW indexes, and rerank results using cross-encoders—all powered by small GGUF models without external databases, APIs, or network latency. It supports memory, disk, or shared memory storage for fast, in-process AI workloads.
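To illustrate the storage-plus-search idea (not the extension's real API; every name below is hypothetical), a minimal in-memory store with metadata filtering can be written in a few lines of Python. The extension's HNSW index answers the same kind of query approximately in sub-linear time; the exact linear scan here is the baseline it accelerates.

```python
class VectorStore:
    """Toy vector store: exact scan over (vector, metadata) pairs."""

    def __init__(self):
        self.items = []

    def add(self, vector, metadata):
        self.items.append((vector, metadata))

    def search(self, query, k=3, where=None):
        """Rank by dot product; an HNSW index approximates this ranking
        without touching every stored vector."""
        hits = []
        for vec, meta in self.items:
            if where and not where(meta):
                continue  # metadata filter
            score = sum(q * v for q, v in zip(query, vec))
            hits.append((score, meta))
        hits.sort(key=lambda t: t[0], reverse=True)
        return [meta for _, meta in hits[:k]]

store = VectorStore()
store.add([1.0, 0.0], {"id": 1, "lang": "en"})
store.add([0.8, 0.6], {"id": 2, "lang": "en"})
store.add([0.0, 1.0], {"id": 3, "lang": "de"})

# Only English docs, ranked by similarity to the query vector.
results = store.search([1.0, 0.0], k=2, where=lambda m: m["lang"] == "en")
print([m["id"] for m in results])  # -> [1, 2]
```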

Why is it gaining traction?

It slashes RAG pipeline latency to 10-30ms by keeping everything local, crushing cloud embedding APIs (50-200ms+) on speed and cost—zero tokens billed, no rate limits. Persistent PHP runtimes like OpenSwoole or RoadRunner shine here, reusing loaded models across requests for single-digit ms queries on millions of vectors. Quantization options cut memory use dramatically while preserving accuracy.
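The memory savings from quantization come from storing each vector dimension in fewer bits, e.g. one int8 byte instead of a 4-byte float for roughly a 4x reduction. A minimal Python sketch of scalar quantization (the general idea, not memvector's specific scheme):

```python
def quantize(vec):
    """Map floats to int8 range [-127, 127] with one shared scale factor."""
    scale = max(abs(x) for x in vec) / 127 or 1.0
    q = [round(x / scale) for x in vec]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; error is bounded by the rounding step."""
    return [x * scale for x in q]

vec = [0.12, -0.5, 0.31, 0.9]
q, scale = quantize(vec)
approx = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(vec, approx))
print(max_err < 0.01)  # -> True: 4x smaller, small rounding error
```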

Who should use this?

PHP backend engineers building semantic search in apps, APIs, or chatbots. Ideal for RAG setups where you index docs, embed queries on-the-fly, and rerank top candidates without spinning up vector DBs like Pinecone. Suits teams on OpenSwoole, RoadRunner, or FrankenPHP targeting low-latency, self-hosted AI.

Verdict

With 10 stars and 1.0% credibility, it's early-stage—build from source, test thoroughly—but excellent docs, examples, and benchmarks make it viable for prototypes. Grab it if local PHP vector search fits; skip for production without more community validation.

