
AI now sees plausible fakes of API keys, PII, credentials. Rust. Sub-millisecond.

18 stars · 100% credibility
Found Feb 22, 2026 at 11 stars -- GitGems finds repos before they trend.
AI Analysis
Rust
AI Summary

Mirage Proxy is a lightweight local tool that intercepts messages to AI services, swaps sensitive personal details and secrets with realistic fakes, and swaps them back in replies to prevent leaks.
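The swap-and-restore flow the summary describes can be sketched as a session-consistent alias table: the same real value always maps to the same fake within a session, so replies can be reversed. This is an illustrative sketch under assumptions, not mirage-proxy's actual implementation; the `AliasTable` name and email-style fakes are invented for the example.

```rust
use std::collections::HashMap;

/// Hypothetical alias table: real -> fake for outbound requests,
/// fake -> real for restoring the model's reply.
struct AliasTable {
    real_to_fake: HashMap<String, String>,
    fake_to_real: HashMap<String, String>,
    counter: usize,
}

impl AliasTable {
    fn new() -> Self {
        AliasTable {
            real_to_fake: HashMap::new(),
            fake_to_real: HashMap::new(),
            counter: 0,
        }
    }

    /// Replace a detected value with a stable fake (outbound direction).
    fn mask(&mut self, real: &str) -> String {
        if let Some(fake) = self.real_to_fake.get(real) {
            return fake.clone();
        }
        self.counter += 1;
        // Illustrative fake; a real tool would generate format-matching values.
        let fake = format!("user{}@example.com", self.counter);
        self.real_to_fake.insert(real.to_string(), fake.clone());
        self.fake_to_real.insert(fake.clone(), real.to_string());
        fake
    }

    /// Restore originals in the model's reply (inbound direction).
    fn unmask(&self, text: &str) -> String {
        let mut out = text.to_string();
        for (fake, real) in &self.fake_to_real {
            out = out.replace(fake, real);
        }
        out
    }
}

fn main() {
    let mut table = AliasTable::new();
    let fake = table.mask("alice@corp.internal");
    let reply = format!("Send the report to {}", fake);
    // Round trip: the reply comes back with the real address restored.
    assert_eq!(table.unmask(&reply), "Send the report to alice@corp.internal");
    // Session-consistent: the same real value yields the same fake.
    assert_eq!(table.mask("alice@corp.internal"), fake);
    println!("round-trip ok");
}
```

The two-way map is what lets the proxy stay invisible: the model works with coherent fake data, and the user only ever sees the originals.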

How It Works

1. 🔍 Discover the protector
You hear about Mirage Proxy when you're worried your AI helpers might accidentally expose real passwords or emails from your files.

2. 📥 Get it set up
You download and install it, either building from source for trust or using a quick install helper.

3. 🛡️ Turn on the shield
With one command, you activate the background proxy that watches all your AI chats and keeps real info safe.

4. 💬 Chat with AI tools
Open your favorite AI coding helper or chat app; everything flows normally, but safely, through the proxy.

5. 📊 Watch it catch secrets
Glance at the live log and watch fake emails and keys silently replace the real ones.

6. 🔧 Toggle anytime
Use simple commands like 'on' or 'off' to control it per chat window, or remove it entirely if needed.

AI magic without worry

Now you brainstorm code and ideas freely, knowing your private info never leaves your machine.


AI-Generated Review

What is mirage-proxy?

Mirage-proxy runs as a local daemon between your LLM tools and APIs like Anthropic or OpenAI, scanning requests for secrets such as API keys, GitHub tokens, emails, phone numbers, and other PII, then swapping them with format-matching fakes, like turning an AKIA... access key into another valid-looking AWS key. Real data never leaves your machine; originals are restored in responses via session-consistent mappings. Built in Rust for sub-millisecond latency, it auto-routes 28+ providers and installs via cargo or brew, with shell integration for tools like Cursor, Aider, or Claude Code.
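The "format-matching fake" idea can be sketched in Rust. This is a hypothetical illustration, not the project's code: it derives a deterministic, plausible-looking AWS access key ID (the AKIA prefix plus 16 uppercase alphanumerics) from the real one, so the same secret always maps to the same fake.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical sketch: produce a fake that keeps the shape of an AWS
/// access key ID. Deterministic per input within a run, not cryptographic.
fn fake_aws_key(real: &str) -> String {
    const ALPHABET: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";
    let mut out = String::from("AKIA");
    // Seed a simple xorshift generator from a hash of the real key.
    let mut h = DefaultHasher::new();
    real.hash(&mut h);
    let mut state = h.finish();
    for _ in 0..16 {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        out.push(ALPHABET[(state % ALPHABET.len() as u64) as usize] as char);
    }
    out
}

fn main() {
    let real = "AKIAIOSFODNN7EXAMPLE";
    let fake = fake_aws_key(real);
    // Same prefix and length, so the fake still "scans" as an AWS key.
    assert!(fake.starts_with("AKIA") && fake.len() == 20);
    // Deterministic: the same real key always maps to the same fake.
    assert_eq!(fake_aws_key(real), fake);
    println!("{}", fake);
}
```

Keeping prefix and length is the point: a [REDACTED] marker changes model behavior, while a shape-preserving fake does not.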

Why is it gaining traction?

Unlike tools that insert obvious [REDACTED] tokens, which prompt models to complain or adapt, mirage-proxy's invisible, format-preserving replacements keep LLMs oblivious and preserve natural behavior. A zero-config service install sets env vars like ANTHROPIC_BASE_URL to localhost:8686, supports SSE streaming, and offers a dry-run mode plus an encrypted vault for persistent mappings. At ~5MB with <1ms overhead, it crushes bloated alternatives like LLM Guard or LiteLLM+Presidio in speed and size.
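Per the review, clients are redirected by pointing their base URL at the local proxy; a minimal shell sketch, using the variable name and port the review cites (how the installer sets this automatically is not shown here):

```shell
# Point an Anthropic-speaking client at the local proxy instead of the
# real API endpoint; the proxy masks secrets, then forwards upstream.
export ANTHROPIC_BASE_URL="http://localhost:8686"
echo "$ANTHROPIC_BASE_URL"
```

Any tool that honors the variable then routes through the proxy with no code changes.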

Who should use this?

Backend devs building AI agents that ingest repo context full of keys, or Cursor/Aider users tired of leaked GitHub tokens in prompts. Anyone routing codebase prompts to cloud LLMs via OpenClaw, Continue, or raw OpenAI calls, especially if sandboxing falls short on network-layer leaks.

Verdict

Grab it if you're proxying LLM traffic with secrets in play—docs are solid, MIT-licensed, and service rollback is one command away. With 11 stars and 1.0% credibility, it's early alpha; test dry-run first before production.


