AgentShepherd

🌟 Open Source AI Agent Security Infrastructure — intercepts and blocks dangerous agent behaviors before they happen. Just one command! Join us to build safer Human-AI Symbiosis!

392
24
100% credibility
Found Feb 04, 2026 at 22 stars (18× growth since).
AI Analysis (Go)
AI Summary

AgentShepherd is a local safety layer that intercepts and blocks risky actions from AI agents, such as reading secret files or running destructive commands.

How It Works

1. 🔍 Discover the risk

You hear that AI helpers can accidentally read passwords or delete important files.

2. 📥 Grab the safety tool

Copy-paste one command to add a protector between your AI and the web.

3. 🚀 Turn it on

Start the guard and point your AI at the local proxy on your own machine.

4. 🛡️ Safe AI magic

Your AI works normally, but sneaky dangerous requests are quietly stopped.

5. ⚙️ Customize shields

Add rules to block specific risks without interrupting legitimate work.

Worry-free power

Enjoy smart AI help that keeps your secrets safe and your computer protected.
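Step 3 above amounts to pointing an OpenAI-compatible client at the local guard instead of the provider. A minimal sketch, assuming the proxy listens on localhost:9090 as the review on this page states; the `/v1` path suffix and env-var approach are assumptions about an OpenAI-compatible setup:

```shell
# Route an OpenAI-compatible client through the local guard instead of the
# provider directly. localhost:9090 is quoted from this page's review; the
# /v1 suffix assumes the proxy mirrors OpenAI's path layout.
export OPENAI_BASE_URL="http://localhost:9090/v1"
export OPENAI_API_KEY="sk-..."  # your real key; the proxy passes auth through unchanged
```

Most OpenAI SDKs read these variables automatically, so no code changes are needed in the agent itself.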


Star Growth

This repo grew from 22 to 392 stars.
AI-Generated Review

What is agentshepherd?

Crust (formerly AgentShepherd) is a Go-based security gateway that sits between your AI agents and LLM providers like OpenAI or Anthropic, intercepting tool calls to block dangerous actions such as reading .env files, running destructive commands, or exfiltrating data. Install it with one curl command, point your agents at localhost:9090, and it handles the rest: auto-routing to providers based on model names while passing client authentication through. Developers get built-in rules against credential theft and shell-history leaks, plus custom YAML rules with hot reload via `crust add-rule file.yaml`.
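The custom-rule workflow mentioned above might look like the following. The YAML schema here is invented for illustration; crust's actual rule format may differ:

```yaml
# Hypothetical rule file -- field names are illustrative, not crust's real schema.
name: block-ssh-keys
description: Deny any tool call that touches private SSH keys
match:
  tool: read_file
  path_pattern: "~/.ssh/id_*"
action: block
```

Per the review, a file like this would be loaded with `crust add-rule block-ssh-keys.yaml` and hot-reloaded without restarting the proxy.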

Why is it gaining traction?

It stands out with near-zero-latency proxying, universal compatibility with frameworks like LangChain, OpenClaw, or GitHub Copilot agents, and simple CLI management (`crust start --auto`). The hook is proactive Layer 0/1 filtering on requests and responses, plus optional OS sandboxing: far easier than baking security into agent code or relying on provider limits. Hot-reloadable rules let you adapt to new threats without restarts, and telemetry logs everything locally.

Who should use this?

AI agent builders using tool-calling LLMs, especially those with GitHub Copilot IntelliJ setups or OpenAI agents running local code execution. Ideal for security-conscious devs at startups prototyping autonomous agents, or teams evaluating agent security benchmarks before production. Skip if you're doing prompt-only inference without tools.

Verdict

Try it for local agent sandboxes: solid docs, CLI, and tests make it dev-friendly, though its 83 stars and 1.0% credibility score at the time of review signal early maturity. Production? Wait for more battle-testing and sandbox polish.


