cyberxuan-XBX

First open-source AI sanitizer with local semantic detection. 7 layers + LLM intent analysis. Zero cloud — your prompts stay on your machine.

14
2
100% credibility
Found Mar 02, 2026 at 14 stars -- GitGems finds repos before they trend. Get early access to the next one.
Sign Up Free
AI Analysis
Python
AI Summary

A local script that scans AI skill files for security risks like hidden malicious instructions, command injections, and encoded threats before they reach your AI assistant.

How It Works

1
🔍 Discover Skill Sanitizer

You learn about this helpful tool that checks AI skills for hidden dangers so your assistant stays safe.

2
📥 Grab the Tool

Download the single easy file to your computer—no extras needed.

3
📄 Pick Your Skill

Choose the AI skill file you want to use with your assistant.

4
🛡️ Run the Safety Check

Feed the file into the tool and it scans for tricky instructions, bad commands, or sneaky secrets.

5
See the Results
All Clear

Everything looks good—your skill is ready to go!

🚫
Risk Found

It spots dangers and explains why to block it.

🎉 Enjoy Safe AI Skills

Now you can confidently use skills with your AI assistant, knowing they're protected from harm.

Sign up to see the full architecture

4 more

Sign Up Free

Star Growth

See how this repo grew from 14 to 14 stars Sign Up Free
Repurpose This Repo

Repurpose is a Pro feature

Generate ready-to-use prompts for X threads, LinkedIn posts, blog posts, YouTube scripts, and more -- with full repo context baked in.

Unlock Repurpose
AI-Generated Review

What is skill-sanitizer?

Skill-sanitizer scans SKILL.md files for malicious content like prompt injections, reverse shells, and credential leaks before they reach your LLM. Built in Python with zero dependencies, it runs seven detection layers locally—including code block awareness and encoding evasion checks—delivering a risk score, severity findings, and cleaned content via simple CLI or function calls. It solves the problem of attackers hiding jailbreaks in "helpful" AI skills, ensuring your agent stays safe without cloud uploads.

Why is it gaining traction?

As the first open-source AI sanitizer with local semantic detection, it beats cloud-only commercial tools by keeping data on-device—no API keys or network calls. Key hooks include 85% fewer false positives in v2.1, ClawHub-tested stats (29% of 550 skills flagged), and extras like synonym overrides and base64 payload decoding. Developers grab it for the ClawHub install command and test suite covering 15 attack vectors.

Who should use this?

AI agent builders pulling skills from ClawHub or prompt marketplaces. Local LLM runners validating user-submitted markdown. Security-focused prompt engineers scanning for trust abuse in skills named "safe-defender."

Verdict

Worth a spin for offline prompt vetting, especially at MIT license and solid docs—but 14 stars and 1.0% credibility score signal early-stage first open-source project. Test it thoroughly before production; false negatives could slip through low maturity.

(178 words)

Sign up to read the full AI review Sign Up Free

Similar repos coming soon.