teamchong

O(n) streaming JSON parser for LLM tool calls. Agents act sooner, abort bad outputs early. WASM SIMD, up to 2000× faster than stock AI SDK parsers.

Found Feb 20, 2026 at 11 stars.
AI Analysis
JavaScript
AI Summary

VectorJSON is a high-performance streaming JSON parser designed for efficiently handling tool calls and large payloads from language models in real-time.

How It Works

1
🔍 Discover fast message reader

You find a streaming parser that reads incoming AI messages piece by piece instead of waiting for the whole response.

2
📦 Add to your project

You install the npm package so the parser is ready to use in your app.

3
⚙️ Set up a live reader

You create an incremental reader that emits events as parts of the message arrive.

4
🚀 Feed in message chunks

As pieces of the AI response trickle in, you push each chunk to the reader, which updates its picture of the JSON immediately.

5
👀 Watch details appear live

Fields like tool names or code snippets become available the moment they complete, letting you preview and react without delay.

6
🎯 React to key parts early

You act on important fields as soon as they appear, skip parts you don't need, or abort the stream if something's off to save time and tokens.

7
⚡ App responds super quick

Your AI helper acts on partial output immediately, users get a smooth experience, and you skip unnecessary waiting.
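The flow above can be sketched in a few lines of JavaScript. This is a toy illustration of the pattern, not vectorjson's actual API: a watcher accumulates stream chunks and fires a callback the moment the hypothetical `tool` field is complete, long before the payload finishes.

```javascript
// Toy sketch: react to the "tool" field of a streaming JSON tool call
// as soon as it is complete, without waiting for the rest of the payload.
class ToolCallWatcher {
  constructor(onTool) {
    this.buffer = "";
    this.onTool = onTool;
    this.fired = false;
  }
  push(chunk) {
    this.buffer += chunk; // toy accumulation; a real parser scans each byte once
    if (this.fired) return;
    // Fire once the tool-name string is fully closed by its quote.
    const m = this.buffer.match(/"tool"\s*:\s*"([^"]*)"/);
    if (m) {
      this.fired = true;
      this.onTool(m[1]);
    }
  }
}

// Chunks arrive as the model streams its tool call.
const chunks = ['{"tool":"sea', 'rch","query":"stream', 'ing json"}'];
let seen = null;
const watcher = new ToolCallWatcher((name) => { seen = name; });
for (const c of chunks) watcher.push(c);
console.log(seen); // "search" -- known one chunk before the payload ends
```

In a real stream the callback fires chunks (potentially seconds) before the response completes, which is what lets the app react or abort early.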


AI-Generated Review

What is vectorjson?

VectorJSON is a JavaScript npm package delivering an O(n) streaming JSON parser optimized for LLM tool calls. It processes chunks from AI streams like Vercel AI SDK or Anthropic outputs, yielding live objects you can access instantly without re-parsing the full buffer. Agents act sooner on fields like tool names, stream code into editors char-by-char, and abort bad outputs early to save tokens.
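The char-by-char streaming described here can be illustrated with a minimal hand-rolled sketch (my own code, not vectorjson's API): once the reader has passed the opening quote of a known string field, every subsequent character is forwarded the instant it arrives.

```javascript
// Minimal sketch of streaming a string field char-by-char as chunks arrive.
// After the marker `"code":"` is seen, each character up to the closing quote
// is emitted immediately -- the pattern used to stream generated code into an
// editor. Escape sequences are ignored for brevity.
function makeCodeStreamer(emit) {
  let buffer = "";      // text not yet past the key marker
  let inValue = false;  // are we inside the "code" string value?
  return function push(chunk) {
    for (const ch of chunk) {
      if (!inValue) {
        buffer += ch;
        if (buffer.endsWith('"code":"')) inValue = true;
      } else if (ch === '"') {
        inValue = false; // value finished
      } else {
        emit(ch);        // forward each character the moment it arrives
      }
    }
  };
}

let out = "";
const push = makeCodeStreamer((ch) => { out += ch; });
for (const chunk of ['{"code":"pri', 'nt(1)"', '}']) push(chunk);
console.log(out); // print(1)
```

A production parser additionally handles escapes, nesting, and arbitrary paths, but the editor sees characters at the same cadence the model emits them.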

Why is it gaining traction?

Stock AI SDK parsers re-scan the growing buffer on every chunk, hitting O(n²) slowdowns (about 6 seconds on 100KB payloads), while VectorJSON scans each byte once, reaching up to 2000× faster performance via WASM SIMD. Users get event-driven callbacks on specific paths, schema-driven skipping of unused fields, and deep equality checks with zero allocations. Early abort on wrong tool names or malformed JSON stops streams mid-response, freeing the main thread instantly.
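The quadratic blow-up is easy to see with a back-of-the-envelope cost model (my own illustration, not the project's benchmark): re-parsing the whole accumulated buffer on every chunk touches each byte once per remaining chunk, while a single-pass parser touches each byte exactly once.

```javascript
// Count bytes touched under the two strategies, without actually parsing.
function rescanWork(chunks) {
  let buffer = "", work = 0;
  for (const c of chunks) {
    buffer += c;
    work += buffer.length; // a full re-parse scans the entire buffer again
  }
  return work;
}
function singlePassWork(chunks) {
  let work = 0;
  for (const c of chunks) work += c.length; // each byte scanned once
  return work;
}

// A 100 KB payload arriving as 1000 chunks of 100 bytes.
const chunks = Array.from({ length: 1000 }, () => "x".repeat(100));
console.log(rescanWork(chunks));     // 50050000 bytes touched
console.log(singlePassWork(chunks)); // 100000 bytes touched (~500x fewer)
```

The gap widens linearly with chunk count, which is why large streamed payloads are where re-scanning parsers fall over.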

Who should use this?

LLM agent developers building UIs that react to tool calls, like streaming file edits or search queries into editors. Frontend teams integrating OpenAI/Anthropic streams who hit parsing lag in Vercel or TanStack AI SDKs. Anyone handling high-volume JSONL from embeddings or mixed LLM outputs with prose and code fences.

Verdict

Try it if streaming LLM JSON blocks your agent—benchmarks crush alternatives, docs pack runnable examples, and Zod integration shines. With 10 stars and 1.0% credibility score, it's early but battle-tested on 100MB payloads; watch for adoption as AI SDKs lag on speed.

