AgentSeal

AgentSeal / agentseal

Public

Security validator for AI agents - find out if your agent can be hacked

11
0
100% credibility
Found Mar 04, 2026 at 11 stars -- GitGems finds repos before they trend. Get early access to the next one.
Sign Up Free
AI Analysis
Python
AI Summary

AgentSeal is a security scanner for AI agents that sends over 150 attack probes to test for vulnerabilities like prompt extraction and injection, providing a trust score, detailed breakdown, and fix recommendations.

Star Growth

See how this repo grew from 11 to 11 stars Sign Up Free
Repurpose This Repo

Repurpose is a Pro feature

Generate ready-to-use prompts for X threads, LinkedIn posts, blog posts, YouTube scripts, and more -- with full repo context baked in.

Unlock Repurpose
AI-Generated Review

What is agentseal?

AgentSeal is a Python security validator for AI agents that probes them for prompt extraction and injection vulnerabilities, revealing if your agent can be hacked. Point it at a directory with `agentseal scan ./` and it auto-discovers system prompts in Python, JS, YAML configs, Ollama Modelfiles, even MCP servers for Claude Desktop or Cursor, then runs 150+ attacks via OpenAI, Anthropic, Ollama, or HTTP endpoints. You get trust scores, leaked probe details, and auto-generated hardening fixes, like a Mandiant security validator tailored for agents.

Why is it gaining traction?

Unlike generic github security scanning tools, it specifically targets AI risks like boundary confusion, tool exploits, and RAG poisoning, with fleet scanning for multi-agent projects and CI integration via security github actions. Developers love the zero-setup discovery—no manual prompt extraction—and remediation that spits out copy-paste clauses for your prompts. It fingerprints defenses like Llama Guard or Azure Prompt Shield, plus mutations to test encoding bypasses, making it a practical pentest suite for LLM apps.

Who should use this?

AI engineers building LangChain or CrewAI agents need it to audit prompt leaks before deployment. Security teams reviewing github security copilot extensions or custom agents in production will appreciate SARIF exports for github security advisories. Devs hardening MCP tools or RAG pipelines against injection get quick wins from its security txt validator-style reports.

Verdict

Try it for early AI agent security audits—solid CLI and reports despite 11 stars and 1.0% credibility score signaling beta maturity. Pair with github security policy scans; lacks broad tests but delivers real value for agent builders now.

(198 words)

Sign up to read the full AI review Sign Up Free

Similar repos coming soon.