Boof-Pack

A local proxy that strips web pages down to clean text before they enter your AI agent's context window. 704K tokens → 2.6K tokens. No LLM required.

Found Mar 26, 2026 at 28 stars.
AI Summary (Python)

A local tool that fetches web pages, removes ads, navigation, and scripts to deliver clean text to AI agents, reducing data size by 86-99%.
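The headline reduction is easy to check with a little arithmetic. A quick sketch of the percentage math behind the 86-99% claim, using the Yahoo Finance figures from the summary:

```python
def reduction_pct(before_tokens: int, after_tokens: int) -> float:
    """Percentage of tokens removed, rounded to one decimal place."""
    return round(100 * (1 - after_tokens / before_tokens), 1)

# The Yahoo Finance example from the summary: 704K tokens down to 2.6K.
print(reduction_pct(704_000, 2_600))  # 99.6
```

So the 704K → 2.6K example sits at the top end of the quoted 86-99% range.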

How It Works

1. 🔍 Find the Token Helper: You hear about a simple tool that cleans up cluttered web pages so your AI chat doesn't waste space on ads and menus.

2. 📥 Bring it home: Download the folder to your computer and run the one-click setup to get everything ready.

3. 🚀 Turn it on: Start the background helper with a single command, and it waits quietly to clean web pages.

4. 🔗 Link it to your AI: Tell your favorite AI app, like Claude or Cursor, where the helper lives so it uses it automatically.

5. Magic cleaning happens: Now when your AI grabs info from websites, it gets super clean text only. Goodbye junk, hello savings!

6. 📊 Check your wins: Peek at the handy stats to see huge cuts in page size, like 99% less clutter.
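The "magic cleaning" in step 5 boils down to dropping non-content tags (scripts, nav, boilerplate) before text reaches the model. A minimal standard-library sketch of that idea; this illustrates the technique only and is not the repo's actual implementation:

```python
from html.parser import HTMLParser

# Tags whose contents are almost never useful to an AI agent (assumed list).
SKIP = {"script", "style", "nav", "header", "footer", "aside"}

class TextExtractor(HTMLParser):
    """Collect visible text, skipping anything nested inside SKIP tags."""

    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # > 0 while inside a skipped tag
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in SKIP:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in SKIP and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())

def strip_page(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

For example, `strip_page("<nav>menu</nav><p>AAPL closed at 191.45</p><script>track()</script>")` keeps only the paragraph text.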

Smarter AI chats

Your AI thinks faster with less waste, saving you time and money on every web lookup.


AI-Generated Review

What is token-enhancer?

Token Enhancer is a Python local proxy server that intercepts web fetches for your AI agents, stripping pages to bare text and slashing context tokens—Yahoo Finance drops from 704k to 2.6k, no LLM or GPU needed. It solves the waste of loading raw HTML full of ads, scripts, and nav into token-limited models. Run it standalone on localhost:8080 with /fetch or /batch endpoints, integrate as an MCP server for Claude Desktop/Cursor, or wrap as a LangChain tool.
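As a rough sketch of the standalone mode, a client might call the proxy like this. The `localhost:8080` address and `/fetch` endpoint come from the description above; the `url` query-parameter name is an assumption:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

PROXY = "http://localhost:8080"

def clean_fetch_url(target: str) -> str:
    """Build the proxy URL that should return the stripped-down text of `target`."""
    return f"{PROXY}/fetch?{urlencode({'url': target})}"

def fetch_clean(target: str, timeout: float = 10.0) -> str:
    # Requires the proxy to be running locally (see the setup steps above).
    with urlopen(clean_fetch_url(target), timeout=timeout) as resp:
        return resp.read().decode("utf-8")
```

A `/batch` variant would presumably accept several URLs at once, amortizing the round trips.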

Why is it gaining traction?

Massive 86-99% reductions via smart cleaning and caching make it a no-brainer local proxy for development, unlike cloud services that need API keys or quotas. Batch processing, prompt refinement that guards tickers, dates, and negations, and stats tracking appeal to devs optimizing agent costs. A zero-setup install.sh and live tests show tangible savings fast.
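The guarded refinement is the interesting part: whatever gets compressed, tokens that would change meaning if dropped must survive. A hypothetical keep-list check; the patterns here are my assumptions, not the repo's actual rules:

```python
import re

# Assumed guard patterns: ticker-like symbols, ISO dates, and negation words.
TICKER = re.compile(r"[A-Z]{1,5}$")
DATE = re.compile(r"\d{4}-\d{2}-\d{2}$")
NEGATIONS = {"not", "no", "never", "none", "cannot"}

def must_keep(token: str) -> bool:
    """True if a refiner should never drop this token during compression."""
    return (bool(TICKER.match(token))
            or bool(DATE.match(token))
            or token.lower() in NEGATIONS)
```

Under these rules, `must_keep("AAPL")`, `must_keep("2026-03-26")`, and `must_keep("not")` all hold, while filler like `must_keep("prices")` does not, so "AAPL did not fall" can never be compressed into "AAPL fell".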

Who should use this?

Backend devs building AI agents for stock research or news scraping, LangChain users fighting web bloat, and Cursor/Claude Desktop users automating web tools. Perfect for prompt engineers working in token-constrained local workflows or local proxy testing setups.

Verdict

Worth cloning for any web-fed AI project: the excellent README, tests, and MCP plug-and-play outweigh the 1.0% credibility score from 28 stars and the project's early maturity. Expect roadmap items like a Playwright fallback to mature it further, but it token-enhances agents right now.

