jgravelle

jMRI v1.0 — open specification for token-efficient context retrieval in MCP servers. SDKs, reference server, benchmark.

Found Mar 11, 2026 at 21 stars.
Language: Python

AI Summary

This repository provides an open specification, SDK clients, reference server, and benchmarks for a standardized interface enabling AI agents to efficiently search and retrieve specific code symbols or document sections from repositories.

How It Works

1. 🔍 Discover jMRI

You learn about an open specification that helps AI assistants find exactly the right pieces of code or docs without reading everything.

2. 📦 Get the tools

You install the SDKs, which make searching and fetching straightforward.

3. 🏗️ Add your projects

You point the tools at your code folders or online projects, and they build a searchable index once.
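The "searchable index" in step 3 is essentially a map from symbol names to their locations. A minimal sketch of what such an index could look like for Python code, using the standard-library `ast` module (hypothetical helper, not the actual jMRI SDK API):

```python
import ast

def build_symbol_map(source: str) -> dict[str, tuple[int, int]]:
    """Map each top-level function/class name to its (start, end) line span."""
    tree = ast.parse(source)
    symbols = {}
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            symbols[node.name] = (node.lineno, node.end_lineno)
    return symbols

SOURCE = '''\
def login(user, password):
    return check(user, password)

def logout(user):
    session.drop(user)
'''

print(build_symbol_map(SOURCE))  # {'login': (1, 2), 'logout': (4, 5)}
```

Building this map once up front is what lets later queries fetch a single symbol's lines instead of re-reading whole files.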

4. 💭 Ask a question

You type a natural question like 'how does the login work?' and get matching locations with quick summaries.

5. 🎯 Grab the details

You pick the best match and pull only the exact code or section you need.
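Steps 4 and 5 together form a summaries-first search followed by an exact-snippet fetch. A toy sketch of that two-phase flow, with hypothetical data and helper names (the real spec's tool signatures may differ):

```python
import re

# Toy index: symbol name -> (summary, full snippet). Hypothetical data,
# standing in for the index a jMRI-style server would build.
INDEX = {
    "login": ("Validates credentials and opens a session.",
              "def login(user, password):\n    return check(user, password)"),
    "logout": ("Drops the user's session.",
               "def logout(user):\n    session.drop(user)"),
}

def search(question: str) -> list[tuple[str, str]]:
    """Phase 1: return (symbol, summary) pairs, summaries only, no full code."""
    words = set(re.findall(r"\w+", question.lower()))
    return [(name, summary) for name, (summary, _) in INDEX.items() if name in words]

def retrieve(symbol: str) -> str:
    """Phase 2: fetch only the exact snippet for one chosen symbol."""
    return INDEX[symbol][1]

matches = search("how does the login work?")   # cheap: summaries first
snippet = retrieve(matches[0][0])              # then just the code you need
```

The point of the split is token economy: phase 1 returns short summaries so the agent can decide what to fetch before any full snippet enters the context window.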

6. 📊 See your savings

After each query, you get a report of how many tokens, and how much time and money, your AI saved by not loading the whole file.
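The savings report in step 6 is simple arithmetic: tokens for the whole file minus tokens for the retrieved snippet. A rough sketch, assuming the common ~4 characters-per-token heuristic and an illustrative placeholder price (the spec's actual accounting may differ):

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text and code.
    return max(1, len(text) // 4)

def savings_report(full_text: str, snippet: str,
                   usd_per_1k_tokens: float = 0.003) -> dict:
    """Compare loading the whole file vs. only the retrieved snippet."""
    full, part = approx_tokens(full_text), approx_tokens(snippet)
    saved = full - part
    return {
        "tokens_full": full,
        "tokens_retrieved": part,
        "tokens_saved": saved,
        # usd_per_1k_tokens is an illustrative placeholder, not a real model price.
        "usd_saved": round(saved / 1000 * usd_per_1k_tokens, 6),
    }

# Roughly the 42k-vs-500-token scenario described in the review below.
report = savings_report("x" * 168_000, "y" * 2_000)
print(report["tokens_saved"])  # 41500
```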

🚀 AI works smarter

Now your AI assistant answers questions precisely and affordably, making your work faster and cheaper.


Star Growth

This repo grew from 21 to 22 stars.
AI-Generated Review

What is mcp-retrieval-spec?

This repo defines jMRI v1.0, an open specification for token-efficient context retrieval in MCP servers. It lets AI agents discover knowledge sources, search by intent for relevant code symbols or doc sections, retrieve exact snippets, and check metadata like tokens saved—slashing context window waste from 42k to under 500 tokens per query. Python SDKs, TypeScript client, reference server, and benchmark suite make it dead simple to add to your MCP setup.
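The four-operation flow described here (discover sources, search by intent, retrieve snippets, check metadata) might look like this from a client's point of view. All names below are hypothetical stand-ins for illustration, not the spec's actual tool signatures:

```python
class ToyRetrievalServer:
    """In-memory stand-in for a jMRI-style MCP server (hypothetical API)."""

    def __init__(self):
        self._sources = {"fastapi-docs": {"Depends": "def Depends(dependency=None): ..."}}
        self._tokens_saved = 0

    def discover(self) -> list[str]:
        return list(self._sources)                        # 1. list knowledge sources

    def search(self, source: str, query: str) -> list[str]:
        return [sym for sym in self._sources[source]      # 2. intent search, names/summaries only
                if sym.lower() in query.lower()]

    def retrieve(self, source: str, symbol: str) -> str:
        snippet = self._sources[source][symbol]           # 3. exact snippet only
        self._tokens_saved += 42_000 - len(snippet) // 4  # illustrative accounting
        return snippet

    def metadata(self) -> dict:
        return {"tokens_saved": self._tokens_saved}       # 4. usage/savings metadata

server = ToyRetrievalServer()
src = server.discover()[0]
hit = server.search(src, "how does Depends work?")[0]
code = server.retrieve(src, hit)
stats = server.metadata()
```

The design point the spec is chasing: an agent only ever pulls the snippet it chose from search results, so the context window never sees the other 42k tokens of the source.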

Why is it gaining traction?

jMRI crushes naive full-file reads and chunk RAG with 1,979x token savings and 96% precision on FastAPI benchmarks, plus built-in cost tracking. MCP devs love the four-tool spec: plug in any compliant server, get summaries-first search to avoid bloating prompts. Integrations for Claude Code and Cursor mean instant wins without rewriting agents.

Who should use this?

Backend devs building MCP agents for code analysis, like querying FastAPI deps without loading the whole repo. AI toolmakers indexing docs or notebooks for precise retrieval. Early adopters tweaking Claude/Cursor for token budgets on large projects.

Verdict

Grab it if you're in MCP land and chasing token efficiency: the spec and Python SDKs are a polished v1.0, and the benchmark backs up the gains. Low 15 stars and 1.0% credibility scream early days, so test locally before prod; the reference server needs a commercial license.

