ppgranger

Content-aware output compression for AI coding assistants. Replaces blind truncation with intelligent strategies per file type: structural summaries for code, schema extraction for configs, error-focused filtering for logs, and smart sampling for CSVs. Saves tokens while preserving what the model actually needs.

81 stars · 3 forks · 100% credibility · Found Feb 18, 2026 at 66 stars
AI Analysis · Python

AI Summary

Token-Saver is a plugin that intelligently shortens verbose command outputs in AI coding tools like Claude Code and Gemini CLI, cutting usage costs while keeping all the vital details.

How It Works

1. 📰 Discover Token-Saver

You hear about a smart helper that shrinks long outputs from everyday commands in your AI chats, saving you money on usage.

2. 📥 Set it up quickly

You download the tool and run one easy command to add it to your AI assistant.

3. Pick your AI friend
🤖 Claude Code

Set it up for Claude to make your coding chats super efficient.

🌟 Gemini CLI

Set it up for Gemini to streamline your command chats.

4. 💬 Start your AI chat

Open your AI assistant and begin a new conversation as usual.

5. ⚡ Watch the magic

When your AI runs commands like file checks or tests, long outputs turn into short, clear summaries – all key details stay!
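For a test run, the kind of error-focused filtering described here could look roughly like this sketch, which keeps failures and tracebacks verbatim while collapsing passing lines (a hypothetical illustration, not the plugin's actual code):

```python
def summarize_test_output(output: str) -> str:
    """Collapse passing test lines, keep failures and errors verbatim.

    Hypothetical sketch of error-focused filtering; the real plugin's
    function names and heuristics may differ.
    """
    kept, passed = [], 0
    for line in output.splitlines():
        if "PASSED" in line:
            passed += 1          # count passing lines, but drop them
        else:
            kept.append(line)    # failures, tracebacks, summaries survive
    if passed:
        kept.append(f"[{passed} passing tests collapsed]")
    return "\n".join(kept)
```

The point is that nothing actionable is lost: a failing assertion stays word-for-word, while a hundred green lines become one.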

6. 📊 Track your wins

See a quick note at the start of each chat showing how much output was saved this session and over the tool's lifetime.
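A savings note like this could be produced by a small tracker along these lines (names and the stats banner format here are assumptions for illustration; the plugin's real implementation is not shown):

```python
class SavingsTracker:
    """Accumulate per-command savings and report a lifetime summary.

    Hypothetical sketch: the plugin's actual storage and banner
    wording may differ.
    """

    def __init__(self) -> None:
        self.commands = 0
        self.original_bytes = 0
        self.saved_bytes = 0

    def record(self, original: str, compressed: str) -> None:
        # Count one compressed command and how many bytes it shaved off.
        self.commands += 1
        self.original_bytes += len(original)
        self.saved_bytes += max(0, len(original) - len(compressed))

    def banner(self) -> str:
        # Percentage saved relative to all original output seen so far.
        pct = 100 * self.saved_bytes / self.original_bytes if self.original_bytes else 0.0
        return f"Lifetime: {self.commands} cmds, {self.saved_bytes} bytes saved ({pct:.1f}%)"
```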

🎉 Save tons effortlessly

Your AI chats run smoother with way fewer costs, and you always get the important info fast.

AI-Generated Review

What is token-saving?

Token-saving is a Python extension for AI coding assistants like Claude Code and Gemini CLI that compresses verbose CLI outputs intelligently. Instead of blind truncation, it uses content-aware strategies: structural summaries for code files, schema extraction for configs, error-focused filtering for logs, and smart sampling for CSVs—saving tokens while keeping what the model actually needs, like git diffs, test failures, or build errors. Install via a simple CLI script targeting Claude, Gemini, or both.
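The content-aware dispatch described above can be sketched as a simple extension-to-strategy map (the strategy names below are hypothetical placeholders, not the repo's actual identifiers):

```python
from pathlib import Path

# Hypothetical strategy labels for illustration; the repo's real
# dispatcher and strategy set are not reproduced here.
STRATEGIES = {
    ".py": "structural_summary",   # keep signatures and classes, drop bodies
    ".json": "schema_extraction",  # keep keys and types, drop bulk values
    ".log": "error_filtering",     # keep errors and warnings, drop noise
    ".csv": "smart_sampling",      # keep header plus a sample of rows
}

def pick_strategy(filename: str) -> str:
    """Choose a compression strategy from the file extension,
    falling back to plain truncation for unknown types."""
    return STRATEGIES.get(Path(filename).suffix, "truncate")
```

The fallback matters: unknown content degrades gracefully to the blind truncation every other tool does by default, so the plugin never does worse than the baseline.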

Why is it gaining traction?

It goes beyond generic token saving by specializing in 15 command families: git status groups files by directory and status, pytest collapses passing tests but preserves full stack traces, and npm audit groups vulnerabilities by severity, often hitting 60-70% compression without losing actionable information. Users get real-time stats ("Lifetime: 342 cmds, 1.2 MB saved (67.3%)") and configuration tweaks via JSON or environment variables. Precision tests ensure no breakage, unlike crude truncation.
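The git status grouping mentioned above might look roughly like this sketch over `git status --porcelain` output (illustrative only; the plugin's real formatter is surely more involved):

```python
from collections import defaultdict
from pathlib import Path

def group_git_status(porcelain: str) -> str:
    """Group `git status --porcelain` lines by status code and directory.

    Hypothetical sketch: in porcelain format each line is a two-column
    status code, a space, then the path.
    """
    groups = defaultdict(list)
    for line in porcelain.splitlines():
        if not line.strip():
            continue
        status, path = line[:2].strip(), line[3:]
        parent = str(Path(path).parent)
        groups[(status, parent)].append(Path(path).name)
    # One summary line per (status, directory) pair instead of one per file.
    return "\n".join(
        f"{status} {parent}/: {', '.join(names)}"
        for (status, parent), names in sorted(groups.items())
    )
```

Ten modified files in one directory collapse to a single line, which is exactly where the 60-70% compression on chatty commands comes from.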

Who should use this?

DevOps engineers wrangling Docker/kubectl outputs, backend devs debugging pytest/cargo test runs, or full-stack teams linting with eslint/ruff amid git diffs and npm builds. Perfect for Claude Code or Gemini CLI workflows where token limits kill context on verbose commands.

Verdict

Grab it if you're on Claude/Gemini CLI and tired of token waste—excellent docs, installer, and 217 tests make it reliable despite 67 stars and 1.0% credibility marking it early-stage. Dev-mode symlinks let you test risk-free; savings tracking sells itself.


