buildoak / wet (Public)

wet claude (Wringing Excess Tokens): transparent API proxy that compresses stale tool results in Claude Code sessions

100% credibility
Found Mar 21, 2026 at 15 stars.
AI Analysis (Go)

AI Summary

Wet Claude is a proxy tool that intercepts and compresses stale tool outputs in Anthropic Claude Code sessions to prevent context bloat.

How It Works

1. 🔍 Discover Wet Claude

You hear about a clever helper that keeps AI chats efficient by smartly shrinking old info without losing what's important.

2. 📥 Install in seconds

Grab it easily with a brew command or quick build, then add the skill and status bar to your AI setup.

3. 🧠 Empower your AI

Your AI learns to spot bloated old outputs and rewrite them into tiny summaries, staying sharp for big projects.

4. 🚀 Launch enhanced AI

Start your AI session through Wet, and it begins optimizing on its own.

5. 📊 See savings live

Watch the status bar update with context health, compressed items, and token wins as chats grow.

🎉 Endless clear thinking

Dive into marathon coding sessions with lean context, no crashes, and your AI always on point.
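The numbers behind a status bar like the one in step 5 reduce to simple arithmetic. A minimal sketch in Go (struct fields, window size, and figures here are hypothetical illustrations, not wet's actual internals):

```go
package main

import "fmt"

// ContextStats models the figures a compression statusline might track
// (hypothetical shape; wet's real fields may differ).
type ContextStats struct {
	WindowTokens    int // model context window size
	UsedTokens      int // tokens in context after compression
	SavedTokens     int // tokens reclaimed from stale tool results
	CompressedItems int // tool results rewritten so far
}

// FillPercent reports how full the context window currently is.
func (c ContextStats) FillPercent() float64 {
	return 100 * float64(c.UsedTokens) / float64(c.WindowTokens)
}

func main() {
	s := ContextStats{
		WindowTokens:    200_000,
		UsedTokens:      84_000,
		SavedTokens:     37_000,
		CompressedItems: 12,
	}
	fmt.Printf("ctx %.0f%% | %d compressed | %dK tokens saved\n",
		s.FillPercent(), s.CompressedItems, s.SavedTokens/1000)
	// → ctx 42% | 12 compressed | 37K tokens saved
}
```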

AI-Generated Review

What is wet?

Wet Claude is a Go-based API proxy that sits between Claude Code and Anthropic's API, surgically compressing stale tool results (old `git status` outputs, pytest runs, massive grep dumps) before they bloat your context. It addresses the autocompact nightmare, where the transcript gets shredded indiscriminately mid-session, by letting you (or Claude itself) profile token usage via CLI commands like `wet status`, `wet inspect`, and `wet compress --ids id1,id2`. Run `wet claude` to launch sessions through it, or `wet serve` for Docker/IDE setups with auto-mode compression.

Why is it gaining traction?

Unlike dry Claude's all-or-nothing autocompact, wet Claude offers a scalpel: deterministic Tier1 compression for Bash tools (a 91% ratio on SWE-bench) plus Tier2 LLM rewrites for agent returns, all controllable at runtime with `wet rules set` or subagent skills. The meta-hook is Claude profiling its own context via `wet install-skill`, which turns the proxy into a self-optimizing loop that compounds token savings in long runs without prompt hacks. Statusline integration shows live fill percentage and savings, a draw for devs tired of 60GB swap spirals.

Who should use this?

Claude Code power users running agent swarms, deep coding marathons, or tool-heavy workflows (git diffs, cargo builds, pytest suites). Ideal for coordinators managing subagents where stale outputs rot context, or anyone hitting 1M windows in Opus/Sonnet sessions. Skip if you're on short chats or non-Claude setups.

Verdict

Grab it if you're deep in Claude Code: at an early 15 stars it's raw, but it's MIT-licensed, Docker-ready, and comes with solid docs. Polish the skill heuristics for your stack; it'll pay off in clearer thinking.

