yvgude / lean-ctx

Hybrid Context Optimizer — Shell Hook + MCP Server. Reduces LLM token consumption by 89-99%. Single Rust binary, zero dependencies.

37 stars · 3 forks · 69% credibility · Found Mar 24, 2026
AI Summary

lean-ctx is a command-line tool that compresses outputs from shell commands and file reads to drastically reduce token consumption when interacting with AI coding assistants.

How It Works

1. 🔍 Discover lean-ctx

You hear about a handy helper that shrinks the info sent to your AI coding buddy, saving you lots of money on usage fees.

2. 📥 Get it on your computer

Pick an easy way to add it, such as installing it with Homebrew or building it with cargo.
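For example, installation might look like the sketch below; the exact crate and Homebrew formula names are assumptions, so check the repo's README for the published ones.

```sh
# Install from crates.io with cargo (crate name assumed to be "lean-ctx")
cargo install lean-ctx

# Or with Homebrew (formula name assumed; the project may ship its own tap)
brew install lean-ctx

# Sanity check that the binary is on your PATH (--version is assumed but conventional)
lean-ctx --version
```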

3. ⚙️ Set up shortcuts

Run a simple one-time setup so your usual terminal commands automatically produce shorter, smarter output.
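The setup command is the `lean-ctx init --global` call mentioned in the review further down this page; a minimal sketch:

```sh
# One-time setup: install the shell hook and its auto-aliases for your shell
lean-ctx init --global

# Start a fresh shell so the new aliases take effect
exec "$SHELL"
```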

4. 🔗 Link to your coding app

Tell your favorite AI editor to use this helper for reading files and running commands.
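As a rough illustration of this step, registering an MCP server in an editor such as Cursor usually means adding an entry to its MCP config; the config path, the server name, and the `mcp` subcommand below are guesses, not documented lean-ctx behavior.

```sh
# Hypothetical sketch: the "mcp" subcommand and the config path are assumptions.
# This overwrites ~/.cursor/mcp.json, so merge by hand if you already have MCP servers configured.
mkdir -p ~/.cursor
cat > ~/.cursor/mcp.json <<'EOF'
{
  "mcpServers": {
    "lean-ctx": {
      "command": "lean-ctx",
      "args": ["mcp"]
    }
  }
}
EOF
```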

5. 📉 Watch savings grow

As you work, everyday tasks like checking files or project status send tiny bits of info instead of huge dumps, cutting costs by up to 99%.

6. 📊 Check your dashboard

Peek at colorful charts showing tokens saved, money spared, and top shortcuts working for you.
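The dashboard lives behind the `lean-ctx gain` command mentioned in the review below; for example:

```sh
# Print accumulated token and cost savings for your sessions
lean-ctx gain
```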

7. 💰 Enjoy cheaper coding

Your AI assistant works faster with less waste, and you see real dollar savings piling up over time.

AI-Generated Review

What is lean-ctx?

lean-ctx is a single Rust binary that slashes LLM token usage by 89-99% in AI coding setups via a hybrid shell hook and MCP server. The shell hook transparently compresses CLI outputs from 60+ commands like git status, docker ps, and cargo build, while the MCP server offers nine tools for editors—cached file reads in six modes (map for deps and APIs, signatures, entropy-filtered), search, shell exec, and cache management. It tracks savings with CLI dashboards, graphs, and a local web UI, plus Token Dense Dialect for even tighter LLM responses.

Why is it gaining traction?

It beats alternatives like RTK by combining shell hooks with MCP tools for deeper integration across Cursor, GitHub Copilot, Claude Code, and more, hitting higher savings on file reads and project context via hybrid context caching. Developers hook it in seconds with `lean-ctx init --global` for 23 auto-aliases, then see real USD cost cuts in `lean-ctx gain`. The precise tiktoken counting, session analytics, and tree-sitter parsing for ten languages make compression reliable without LLM tweaks.

Who should use this?

AI-assisted coders in Cursor or Copilot grinding TypeScript/Rust projects with frequent git diffs, npm builds, or file inspections. Terminal-heavy backend devs running docker/kubectl in Claude Code sessions, or anyone billing LLM API costs on medium repos where re-reading files burns tokens. Skip if you're CLI-light or locked to non-MCP editors.

Verdict

Grab it via cargo or Homebrew if you're in the ecosystem; polished docs and setup make the 37 stars and 69% credibility score forgivable for an early Rust binary. Broad editor support shows maturity, but watch for edge cases in niche commands; test your workflow first.
