aeromomo

🦞 Claw Compactor — The 98% Crusher. Cut your AI agent token spend in half with 5 layered compression techniques.

1,201 stars · 102 · 100% credibility
Found Feb 12, 2026 at 329 stars (4x growth since discovery)
AI Summary · Python

Claw Compactor is a collection of Python scripts that apply multiple layers of rule-based compression to markdown memory files and AI session transcripts in OpenClaw workspaces to reduce their size.
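One of those rule-based layers could, in principle, be as simple as exact-duplicate line removal over a markdown memory file. The sketch below is an illustration of the idea, not the repo's actual implementation; the function name is hypothetical:

```python
def dedupe_lines(text: str) -> str:
    """Drop exact-duplicate non-blank lines, keeping first occurrences in order."""
    seen = set()
    out = []
    for line in text.splitlines():
        key = line.strip()
        if key and key in seen:
            continue  # skip a fact we've already kept
        if key:
            seen.add(key)
        out.append(line)
    return "\n".join(out)

memory = "- user prefers dark mode\n- project uses Python 3.9\n- user prefers dark mode"
print(dedupe_lines(memory))
```

Because the rule is deterministic and order-preserving, the surviving lines read exactly as they did before compression.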

How It Works

1. 📖 Discover Claw Compactor

You hear about a handy helper that tidies up your AI assistant's notes and chat histories to save space without losing key details.

2. 💾 Bring it home

Download the simple folder and place it next to your AI workspace folder on your computer.

3. 🔍 Preview the magic

Run a quick check on your workspace to see exactly how much smaller everything could get.

4. 🗜️ Compress everything

With one easy command, it smartly shrinks all your notes, memories, and chat logs while keeping all the important facts safe.

5. 📉 See the savings

Check your updated files: they're now much smaller, so your AI uses fewer resources and costs less to run.

6. 🔄 Set it on autopilot

Add it to your routine so it tidies up automatically every week or so.

🎉 AI feels faster

Your assistant works quicker with cleaner notes, and you love the ongoing savings on time and money.
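The preview step above amounts to a dry run that only reports estimated savings. A minimal sketch, assuming a rough ~4-characters-per-token heuristic (the function names and report format are illustrative, not the repo's actual benchmark logic):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def preview_savings(before: str, after: str) -> str:
    """Report estimated token counts before/after without touching any files."""
    b, a = estimate_tokens(before), estimate_tokens(after)
    pct = 100 * (b - a) / b
    return f"{b} -> {a} tokens ({pct:.0f}% smaller)"

print(preview_savings("x" * 400, "x" * 100))  # -> 100 -> 25 tokens (75% smaller)
```

Running the estimate first means you can decide whether compression is worth it before any file is rewritten.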


Star Growth

This repo grew from 329 to 1,201 stars.
AI-Generated Review

What is claw-compactor?

Claw-compactor is a Python 3.9+ toolkit that compresses AI agent workspaces, slashing token counts in memory markdown files and session JSONL transcripts by 50%+ using five deterministic layers like deduplication, dictionary encoding, and observation extraction. It targets OpenClaw setups but works anywhere with verbose context, delivering one-command runs via `mem_compress.py full` for mostly lossless savings—no LLM calls needed. Users get tiered summaries, token benchmarks, and artifacts like codebooks that travel with compressed files.
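Dictionary encoding with a codebook that travels alongside the compressed file could look roughly like the following sketch. The phrase list, marker format, and function names are assumptions for illustration, not the repo's actual scheme:

```python
def dict_encode(text: str, phrases: list[str]) -> tuple[str, dict[str, str]]:
    """Replace frequent long phrases with short codes; return text plus codebook."""
    codebook = {}
    for i, phrase in enumerate(phrases):
        code = f"\u00a7{i}\u00a7"  # section-sign markers, assumed absent from source text
        text = text.replace(phrase, code)
        codebook[code] = phrase
    return text, codebook

def dict_decode(text: str, codebook: dict[str, str]) -> str:
    for code, phrase in codebook.items():
        text = text.replace(code, phrase)
    return text

doc = "the user asked the assistant; the user asked the assistant again"
enc, book = dict_encode(doc, ["the user asked the assistant"])
print(enc)
print(dict_decode(enc, book) == doc)  # True: fully reversible
```

Keeping the codebook next to the compressed file is what makes the layer reversible: decoding needs nothing but the artifact itself.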

Why is it gaining traction?

It hooks developers with zero-cost, rule-based compression that previews savings via `benchmark` before touching files, plus 97% reductions on transcripts through structured observation pulls. CJK-aware token estimation and full reversibility stand out over LLM-dependent tools, while commands like `observe` and `dict` let you mix layers. The compactor's focus on workspace paths, IPs, and enums via RLE delivers practical wins without retraining models.
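Run-length encoding over repeated enum-like values, as mentioned above, might be sketched like this (the status-list input and the `VALUExN` output format are assumptions for illustration):

```python
from itertools import groupby

def rle_encode(values: list[str]) -> str:
    """Collapse runs of identical values: OK,OK,OK,ERR becomes OKx3,ERR."""
    parts = []
    for value, run in groupby(values):
        n = sum(1 for _ in run)
        parts.append(f"{value}x{n}" if n > 1 else value)
    return ",".join(parts)

print(rle_encode(["OK", "OK", "OK", "ERR", "OK"]))  # -> OKx3,ERR,OK
```

RLE pays off precisely on the kinds of fields the review calls out: statuses, paths, and IPs that repeat verbatim across many log lines.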

Who should use this?

AI agent maintainers on OpenClaw hitting token budgets in memory dirs or session logs. Ops teams automating weekly context cleanup via cron. Devs building long-context LLM apps needing deterministic token trimming without quality loss.
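For session JSONL transcripts, an observation-extraction pass could be sketched as keeping only the structured fields and dropping verbose payloads. The field names (`role`, `content`, `raw_bytes`) are hypothetical, not the repo's actual schema:

```python
import json

def extract_observations(jsonl: str) -> str:
    """Keep only role plus a trimmed content field from each transcript line."""
    kept = []
    for line in jsonl.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        kept.append({"role": event.get("role"), "content": event.get("content", "")[:80]})
    return "\n".join(json.dumps(e) for e in kept)

transcript = '{"role": "tool", "content": "very long tool output...", "raw_bytes": "AAAA"}'
print(extract_observations(transcript))
```

Discarding bulky raw payloads while retaining who said what is how transcript-focused passes can reach much larger reductions than memory-file passes.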

Verdict

Grab it for token audits if you're on OpenClaw: the strong README and 800+ passing tests show polish. Still early, though; benchmark your workspace before a full deploy.


