KevinXuxuxu/anon_proxy

Anonymization proxy between your local environment and LLM providers

13 stars · 100% credibility · Found by GitGems on Apr 26, 2026
AI Analysis (Python)

AI Summary

This tool acts as a local privacy guard for AI chats with services like Anthropic's Claude, hiding personal details in outgoing messages and restoring them in responses so sensitive data never reaches the AI provider.
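The mask-and-restore round trip the summary describes can be sketched in a few lines. This is a minimal illustration, not anon_proxy's actual code: PII is swapped for stable placeholders before a request leaves the machine, and the placeholders are swapped back in the provider's reply.

```python
import re

# Toy email matcher; the real tool detects many PII types with a local model.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(text: str, table: dict) -> str:
    """Replace each email with a stable placeholder, recording the mapping."""
    return EMAIL.sub(
        lambda m: table.setdefault(m.group(0), f"<EMAIL_{len(table)}>"), text
    )

def unmask(text: str, table: dict) -> str:
    """Restore the original values in text coming back from the provider."""
    for real, placeholder in table.items():
        text = text.replace(placeholder, real)
    return text

table = {}
masked = mask("Contact alice@example.com about the invoice.", table)
print(masked)  # Contact <EMAIL_0> about the invoice.

# The provider only ever sees the placeholder; the reply is unmasked locally.
reply = "Drafted the note to <EMAIL_0>."
print(unmask(reply, table))  # Drafted the note to alice@example.com.
```

Because the placeholder is stable per value, the same email maps to the same token across a whole conversation, so the model's replies stay coherent.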

How It Works

1. 🔍 Discover privacy worries

You want to chat with AI helpers like Claude, but worry that names, emails, or phone numbers will be shared by accident.

2. 💡 Find the privacy shield

You come across this tool, which keeps your personal details safe right on your computer.

3. 📥 Get it ready

Download the tool and install its dependencies on your machine; setup follows ordinary Python project steps.

4. ▶️ Turn on the shield

Start the proxy with one command, and it quietly sits between your chats and the provider.

5. 🔗 Connect your AI chat

Point your favorite AI chat app's base URL at the proxy instead of the provider; it forwards everything securely.

6. 💬 Chat freely and safely

Type your messages with real details; the proxy masks them before sending and restores them only in the responses you see.

7. 🛡️ Enjoy total privacy

Your conversations stay private: personal info never leaves your device in plain form, giving you peace of mind.

AI-Generated Review

What is anon_proxy?

anon_proxy is a Python-based anonymization proxy that sits between your local apps and LLM APIs such as Anthropic's Claude, masking PII like names, emails, and phone numbers before requests leave your machine. It runs a privacy-filter model locally via PyTorch and Transformers, replacing sensitive data with stable placeholders and unmasking them in responses for seamless use. Start the server with `uv run python -m anon_proxy.server` (it listens on port 8080), point your client's base URL at it, and your auth headers pass through unchanged, so raw data never goes upstream.
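With those defaults, switching a client over is just a base-URL change. A hedged sketch of the request a client would send through the proxy, using the Anthropic Messages API payload shape; the model name and API key here are placeholders, and the port comes from the description above:

```python
import json

# The only client-side change versus talking to Anthropic directly is the
# base URL; auth headers are forwarded unchanged.
PROXY_URL = "http://localhost:8080/v1/messages"

headers = {
    "x-api-key": "sk-ant-...",         # placeholder; passed through as-is
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}
payload = {
    "model": "claude-sonnet-4-5",      # example model name
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Email bob@corp.com the signed contract."}
    ],
}
body = json.dumps(payload)
# The proxy, not the client, masks bob@corp.com before forwarding upstream.
print(PROXY_URL)
```

The same applies to SDK clients: most Anthropic SDKs accept a configurable base URL, so no prompt or code rewriting is needed.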

Why is it gaining traction?

It delivers transparent PII anonymization for LLM chats and tools, supporting streaming responses, tool calls, and file contents without breaking SDKs such as Claude Code. Custom regex patterns and chunk sizes let you tune for entities the model misses, like SSNs or IPs, and its context-aware local detection outperforms basic regex-only proxies. A debug mode logs masked diffs to stderr, making it easy to verify the proxy's behavior without vendor-side logging risks.
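The custom-pattern idea can be sketched as follows. The config shape here is hypothetical, not anon_proxy's actual API: each user-supplied regex gets a label, and matches are replaced with stable placeholders just like model-detected entities.

```python
import re

# Hypothetical custom patterns for entities a model might miss.
CUSTOM_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def mask_custom(text: str, table: dict) -> str:
    """Mask every custom-pattern match, keeping one placeholder per value."""
    for label, pattern in CUSTOM_PATTERNS.items():
        for match in pattern.findall(text):
            placeholder = table.setdefault(match, f"<{label}_{len(table)}>")
            text = text.replace(match, placeholder)
    return text

table = {}
out = mask_custom("Host 10.0.0.7 stores SSN 123-45-6789.", table)
print(out)  # Host <IPV4_1> stores SSN <SSN_0>.
```

Keeping the mapping in one shared table means custom-pattern placeholders can be unmasked by the same reverse lookup as everything else.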

Who should use this?

AI engineers building local LLM agents with sensitive user data, like customer support bots processing emails or notes. Devs using Anthropic SDKs in tools like Claude Code who need data-anonymization compliance without rewriting prompts. Privacy-focused teams that want client-side anonymization rather than network-level proxies for production workflows.

Verdict

Try it if you're on Anthropic and care about PII leakage: solid docs, test scripts, and an Anthropic adapter make setup fast despite only 13 stars. It is still early, though; it lacks OpenAI/multi-provider support and Apple Silicon optimization, so pair it with monitoring until it is more battle-tested.
