SKILL-INJECT is a benchmark for testing prompt injection vulnerabilities in LLM agent skill files across multiple AI coding agents and safety policy conditions.
How It Works
You find a tool online that checks if AI helpers can be tricked by hidden bad instructions in their skill files.
You install a few simple programs like a box for safe testing and basic tools to run checks.
You link your favorite AI coding buddies so they can join the safety tests.
With one click, you start running tests to see if sneaky instructions can fool the AIs.
You monitor as different AIs face various tricky scenarios under safety rules.
You get clear reports showing which AIs fell for tricks and how well they did.
Now you know the weak spots and can make your AI agents much safer to use.
Star Growth
Repurpose is a Pro feature
Generate ready-to-use prompts for X threads, LinkedIn posts, blog posts, YouTube scripts, and more -- with full repo context baked in.
Unlock RepurposeSimilar repos coming soon.