elara-labs / code-context-engine

Claude re-reads your code every session. Make it stop. Save 70%+ on tokens. Local MCP server with AST indexing, hybrid search, and cross-session memory.

38 stars · 10 forks · 100% credibility
Found May 03, 2026 at 32 stars.
AI Analysis · Python
AI Summary

A local indexing tool that lets AI coding assistants retrieve relevant code snippets instead of entire files, cutting input token usage by up to 94% in benchmarked results.
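The headline savings figure is a straightforward ratio of context sizes. A minimal sketch with invented token counts (assumptions for illustration, not the project's benchmark data):

```python
# Illustration of the token-savings claim: sending retrieved snippets
# instead of whole files shrinks the input context.
# The token counts below are invented for the example.

def reduction(full_context_tokens: int, snippet_tokens: int) -> float:
    """Fraction of input tokens saved by sending snippets instead of full files."""
    return 1 - snippet_tokens / full_context_tokens

# e.g. a 50,000-token full-file context vs a 3,000-token snippet context
print(f"{reduction(50_000, 3_000):.0%} saved")  # prints "94% saved"
```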

How It Works

1
💡 Discover smarter coding help

You learn about a tool that lets AI assistants search your code for just the relevant pieces instead of reading everything, cutting usage costs sharply.

2
📦 Add the tool to your computer

Install it locally so it's ready to use with your favorite coding apps.

3
✨ Set it up in your project

Run a one-line setup (`cce init`) in your code folder; it indexes the codebase and automatically connects to your coding environment.

4
🔄 Refresh your coding app

Restart your editor so it picks up the newly configured local server.

5
🧠 AI searches the index

Your AI now finds just the right code pieces instantly instead of loading full files.

6
📊 Check your savings dashboard

Open the dashboard to see charts of how much cheaper and faster your sessions are.

🎉 Code smarter, spend less

Enjoy large savings on AI costs while your assistant keeps your full project context.


Star Growth

The repo grew from 32 stars at discovery to 38.
AI-Generated Review

What is code-context-engine?

Code-context-engine is a Python tool that indexes your codebase with AST-aware chunking for semantic search, running a local MCP server so Claude and other AI agents query relevant code snippets instead of re-reading full files every session. It slashes input tokens by 70%+ (94% benchmarked on FastAPI), with hybrid vector/BM25 retrieval, graph expansion for related code, and cross-session memory for decisions. Run `cce init` to index, install git hooks, and auto-configure editors like Claude Code, Cursor, VS Code, or Gemini CLI.
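"AST-aware chunking" means splitting files on syntactic boundaries rather than fixed line counts. A minimal sketch of the idea using Python's `ast` module; this illustrates the technique, not code-context-engine's actual implementation:

```python
# Sketch of AST-aware chunking: split a source file on top-level
# function/class boundaries so each chunk is a semantically complete
# unit. Hypothetical illustration, not the project's real indexer.
import ast

def chunk_source(source: str) -> list[str]:
    """Return one chunk per top-level function or class definition."""
    tree = ast.parse(source)
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            chunks.append(ast.get_source_segment(source, node))
    return chunks

code = '''
def add(a, b):
    return a + b

class Greeter:
    def hello(self):
        return "hi"
'''
for chunk in chunk_source(code):
    print(chunk.splitlines()[0])
```

Because each chunk is a whole definition, an embedding or BM25 index built over them returns semantically complete units rather than arbitrary line windows.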

Why is it gaining traction?

Unlike cloud indexers or editor-locked tools, it's zero-config, local-first, and multi-editor via MCP, with a dashboard for live charts and a `cce savings` CLI showing real dollar estimates based on Anthropic pricing. Git hooks auto-reindex changes in under a second, output compression cuts reply tokens by up to 75%, and reproducible benchmarks back the retrieval quality (90% recall@10). Developers value the privacy of local indexing and the measurable ROI.
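Hybrid retrieval needs some way to merge the BM25 and vector result lists. Reciprocal rank fusion is one common choice; whether this project uses it is an assumption, and the hit lists below are hypothetical:

```python
# Reciprocal rank fusion (RRF): combine two ranked result lists by
# scoring each document as the sum of 1/(k + rank) over the lists.
# One plausible fusion strategy for hybrid BM25 + vector search;
# not confirmed to be code-context-engine's method.

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked lists; documents high in either list rise to the top."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical hit lists from the two retrievers
bm25_hits   = ["auth.py:login", "db.py:connect", "auth.py:hash_pw"]
vector_hits = ["auth.py:hash_pw", "auth.py:login", "api.py:routes"]
print(rrf([bm25_hits, vector_hits]))
```

Chunks ranked highly by both retrievers (here `auth.py:login` and `auth.py:hash_pw`) accumulate score from each list and dominate the fused ranking.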

Who should use this?

Backend devs on Python/JS/TS projects using Claude Code or Cursor for refactoring, where full-file context inflates token bills. Teams with shared repos needing cross-session recall of architecture decisions, or VS Code Copilot users tired of re-explaining their codebase. Best suited to mid-sized codebases (monorepo support isn't there yet).

Verdict

Try it if you're deep in Claude Code workflows -- the benchmarks and CLI make token savings tangible, and the docs are strong despite the repo's early days (38 stars). Maturity lags (beta classifiers), so validate on a side project first; uninstall is clean.


