zeroc00I

Reverse proxy for Claude Code that anonymizes sensitive pentest data (IPs, hashes, credentials, hostnames, PII) before it reaches Anthropic. Dual-layer detection: local Ollama LLM + regex safety net, with per-engagement vault and self-improving feedback loop.

208 stars · 24 forks · 100% credibility · Found Apr 15, 2026 at 28 stars
AI Analysis
AI Summary

A tool that intercepts and anonymizes sensitive penetration testing data before it reaches an AI service, replacing it with safe fakes and restoring originals for the user.
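
As a rough illustration of the regex layer of that idea, here is a minimal sketch that swaps IPv4 addresses for fake surrogates and keeps a mapping so originals can be restored. The class name, surrogate address pool, and method names are illustrative assumptions, not the repo's actual implementation:

```python
import re

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

class Anonymizer:
    """Sketch: replace real IPv4 addresses with stable fake stand-ins."""

    def __init__(self):
        self.forward = {}   # real -> fake
        self.reverse = {}   # fake -> real

    def anonymize(self, text: str) -> str:
        def swap(match):
            real = match.group(0)
            if real not in self.forward:
                # Surrogate pool is an assumption; a real tool would avoid
                # collisions with addresses already present in the data.
                fake = f"10.99.0.{len(self.forward) + 1}"
                self.forward[real] = fake
                self.reverse[fake] = real
            return self.forward[real]
        return IPV4.sub(swap, text)

    def deanonymize(self, text: str) -> str:
        # Restore originals so the user sees real values again.
        for fake, real in self.reverse.items():
            text = text.replace(fake, real)
        return text
```

The same swap-and-record pattern extends to hashes, credentials, and hostnames, with the LLM layer catching context-dependent items a regex cannot.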

How It Works

1
🔍 Find Privacy Shield

You discover a helpful tool that protects client secrets while using smart AI for security testing.

2
Pick Setup Style
💻
Local Quick Start

Get it running on your own machine in moments.

☁️
Remote Secure

Set up safely in the cloud and connect securely.

3
📋 Start Client Session

Name this session for your specific client to keep all protections organized and separate.

4
🛡️ Activate Protection

Turn on the shield and connect your AI helper; from then on, sensitive info is automatically hidden from the AI.

5
🔬 Run Tests Safely

Chat with the AI and perform security scans; it works with fake stand-ins, but you see the real details.

🎉 Secure Testing Done

Complete powerful security work with AI smarts, confident no client data ever left your control.
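
The round-trip in steps 4 and 5 can be sketched as a pure function, assuming a simple real-to-fake mapping vault (all names here are illustrative, not the repo's API):

```python
# Sketch of the proxy round-trip: the prompt is scrubbed on the way out,
# and the model's reply is restored on the way back.
def proxy_round_trip(prompt, vault, call_model):
    scrubbed = prompt
    for real, fake in vault.items():       # vault: real -> fake mapping
        scrubbed = scrubbed.replace(real, fake)
    reply = call_model(scrubbed)           # only fake surrogates leave the host
    for real, fake in vault.items():
        reply = reply.replace(fake, real)  # the user sees real values again
    return scrubbed, reply
```

The key property is that the cloud model only ever receives the scrubbed text, while the restored reply reads as if the model had seen the real data.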

AI-Generated Review

What is LLM anonymization?

This Python-based reverse proxy acts as a transparent shield for Claude Code during pentests, using LLM anonymization to scrub sensitive data like IPs, hashes, credentials, hostnames, and PII before it reaches Anthropic's API. Built with FastAPI, it layers Ollama for contextual detection (think bare hostnames or org names) on top of regex for IPs, CIDRs, and tokens, storing mappings in a per-engagement SQLite vault. You run Claude Code unchanged, get real data back deanonymized, and Anthropic sees only fake surrogates, solving the nightmare of leaking client data to cloud LLMs.
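
A per-engagement mapping vault could look something like the following sketch; the table and column names are assumptions for illustration, not the repo's actual schema:

```python
import sqlite3

def open_vault(path):
    """Open (or create) an engagement-scoped vault of real/fake mappings."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS mappings (
                    real TEXT PRIMARY KEY,
                    fake TEXT UNIQUE NOT NULL,
                    kind TEXT NOT NULL)""")
    return db

def store_mapping(db, real, fake, kind):
    # INSERT OR IGNORE keeps surrogates stable across a session.
    db.execute("INSERT OR IGNORE INTO mappings VALUES (?, ?, ?)",
               (real, fake, kind))
    db.commit()

def lookup_real(db, fake):
    row = db.execute("SELECT real FROM mappings WHERE fake = ?",
                     (fake,)).fetchone()
    return row[0] if row else None
```

Keeping one vault file per engagement is what makes client isolation trivial: deleting the file (or pointing the proxy at a new one) severs every mapping at once.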

Why is it gaining traction?

It stands out with dual-layer detection for near-100% coverage on real pentest outputs (nmap, CrackMapExec, mimikatz), plus Docker and VPS setups via SSH tunnels for easy reverse proxy deployment without local installs. The self-improving feedback loop auto-fixes leaks found in test fixtures, and CLI tools like new-engagement.sh isolate vaults per client. Developers dig the zero-leak integration tests and seamless workflow: no more manual redaction.
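
The feedback loop described above hinges on leak checks over real tool-output fixtures. A minimal sketch of such a zero-leak check, with made-up fixture values and a toy redactor standing in for the repo's actual pipeline:

```python
def find_leaks(anonymize, fixture_text, sensitive_values):
    """Return every sensitive value that survives anonymization.

    An empty result means the fixture is clean; any survivors can be
    fed back as new detection rules or fixtures.
    """
    scrubbed = anonymize(fixture_text)
    return [v for v in sensitive_values if v in scrubbed]

# Example fixture modeled loosely on nmap output (values are invented):
fixture = "Nmap scan report for dc01.corp.local (192.168.1.50)"
sensitive = ["dc01.corp.local", "192.168.1.50"]

# Toy redactor; the real project layers regex plus a local LLM here.
redact = lambda t: (t.replace("dc01.corp.local", "host-alpha")
                     .replace("192.168.1.50", "10.99.0.1"))
```

Running this over a corpus of captured tool output is what lets the project claim "zero-leak" coverage: every surviving value is a concrete regression case.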

Who should use this?

Pentest engineers and red teamers running Claude Code on live client engagements with real IPs, creds, and PII. Ideal for those scripting bash tools or grepping logs in AI chats, needing LLM data anonymization without switching providers. Skip if you're not handling NDAs or prefer fully local AI.

Verdict

Promising early tool for LLM anonymization in high-stakes pentests, with solid docs, CLI-driven setups (reverse proxy via Docker, VPS tunnels), and rigorous tests, but the numbers at the time this review was generated (13 stars, 1.0% credibility score) scream "prototype." Try the Docker quickstart if you're paranoid about data leaks; contribute fixtures to mature it.
