vurakit / agentveil

Public

Your code contains API keys, passwords, and personal data. AgentVeil detects 39 PII & secret types, masks them before AI sees them — then restores on response.

65 stars · 8 forks · 100% credibility
Found Mar 01, 2026 at 56 stars.
AI Analysis
Language: Go
AI Summary

Agent Veil is a privacy proxy that anonymizes personal data in AI conversations, blocks prompt injections, routes across multiple providers, and enforces compliance without changing your code.

How It Works

1
🔍 Discover safe AI protection

You hear about Agent Veil, a helpful shield that keeps your personal info safe when chatting with AI tools like Claude or Cursor.

2
⚙️ Set it up easily

Run one quick command on your computer, and it installs everything you need, starting a background helper automatically.

3
🛡️ Connect your AI buddy

Point your favorite AI coding tool to the local helper with a simple setting change, and it's instantly protected.

4
💬 Chat freely with real details

Share phone numbers, IDs, or emails in your conversations, knowing they're safely hidden from the AI.

5
✨ See the magic happen

Your responses come back with real details restored perfectly, while everything stays secure behind the scenes.

6
📊 Check safety reports

Review easy audits of your AI instructions or compliance checks to ensure everything meets safety rules.

Secure AI superpowers unlocked

Now you use powerful AI tools confidently, with personal data protected and full peace of mind every time.
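Step 3 above boils down to a one-line configuration change on the client side. A minimal sketch, assuming the proxy listens on http://localhost:8080 (the port used in the env-var example elsewhere on this page) and exposes an OpenAI-compatible endpoint; the path shown is the standard chat-completions route, not something AgentVeil-specific:

```python
import os

# Illustrative only: the single client-side change is the base URL.
# "http://localhost:8080" matches the OPENAI_BASE_URL example on this page.
os.environ["OPENAI_BASE_URL"] = "http://localhost:8080"

def endpoint(path="/v1/chat/completions"):
    # Any OpenAI-compatible client that honors this env var is now proxied.
    base = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com")
    return base.rstrip("/") + path

print(endpoint())  # requests now flow through the local helper
```

Because the override lives in the environment, the AI tool itself needs no code changes, which matches the "simple setting change" described above.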


Star Growth

This repo grew from 56 to 65 stars.
AI-Generated Review

What is agentveil?

AgentVeil is a Go-based security proxy that intercepts AI API calls from tools like Cursor, Claude Code, or Aider, scans prompts for 39 PII types (CCCD, SSN, emails) and secrets (API keys, PEM files), masks them before they reach the LLM, then seamlessly restores the real data in the response. It tackles the familiar nightmare of hard-coded API keys and personal data in your code leaking to AI models through coding assistants. Drop it in with a single environment variable such as OPENAI_BASE_URL=localhost:8080; no code changes needed.
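The mask-then-restore round trip can be sketched in a few lines. This is an illustrative toy, not AgentVeil's engine: the two regexes stand in for its 39 detectors, and the placeholder format is invented for the example:

```python
import re

# Toy detectors: stand-ins for AgentVeil's 39 PII/secret types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}"),
}

def mask(text):
    """Replace each detected value with a placeholder, remembering the original."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        def replace(m, label=label):
            token = f"<{label}_{len(mapping)}>"
            mapping[token] = m.group(0)
            return token
        text = pattern.sub(replace, text)
    return text, mapping

def restore(text, mapping):
    """Swap placeholders in the model's response back to the real values."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

prompt = "Email dev@example.com, key sk-abc123abc123abc123abc"
masked, mapping = mask(prompt)
assert "dev@example.com" not in masked     # the LLM never sees the real value
assert restore(masked, mapping) == prompt  # the response comes back intact
```

The proxy keeps the token-to-value mapping on your side of the wire, which is why real details reappear in responses without the provider ever having seen them.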

Why is it gaining traction?

Zero-config setup via Docker or native install auto-injects the needed environment variables into your CLI and repository workflows, and CLI commands like agentveil scan and agentveil audit check files such as a repo's README or skill.md for risks. Multi-provider routing (OpenAI, Anthropic, Gemini) with failover, prompt-injection defense, and compliance scoring for Vietnam's AI Law and GDPR make it a drop-in shield for prompts containing Python or SQL code. Role-based masking (admins see full values, viewers see partial ones) fits teams fast.
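The failover behavior mentioned above can be sketched as ordered fallthrough. The provider names come from this page; the callable interface and error type are assumptions for illustration:

```python
class ProviderError(Exception):
    pass

def route(prompt, providers):
    """Try each (name, call) pair in order; fall through on failure."""
    failures = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderError as exc:
            failures.append(f"{name}: {exc}")
    raise ProviderError("all providers failed: " + "; ".join(failures))

def openai_down(prompt):
    raise ProviderError("rate limited")

providers = [
    ("openai", openai_down),                # primary fails...
    ("anthropic", lambda p: "echo: " + p),  # ...so the next provider answers
    ("gemini", lambda p: "echo: " + p),
]
name, reply = route("hello", providers)
assert (name, reply) == ("anthropic", "echo: hello")
```

Since masking happens before routing, the same anonymized prompt is what reaches whichever provider ends up answering.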

Who should use this?

DevOps engineers whose prompts carry PII or secrets, AI agent builders using Cursor or Aider where a stray paste means leaked keys, and compliance officers auditing what their teams send to LLMs. Ideal for Vietnamese teams needing CCCD/TIN protection in their AI workflows.

Verdict

Try it for prototyping AI security: the CLI and SDKs (Python, Go, Node, LangChain) deliver quick wins, though the modest star count signals early maturity. Solid docs and tests, but watch for production scaling.


