spiffy-oss

Open-source AI artifact scanner. Detect malicious agent skills, MCP servers, and IDE rule files before they run.

16
0
100% credibility
Found Mar 06, 2026 at 16 stars -- GitGems finds repos before they trend. Get early access to the next one.
Sign Up Free
AI Analysis
AI Summary

Artguard provides a guide to generate a scanner that examines AI agent skills, configurations, and rule files for privacy risks, manipulative instructions, and security threats, producing detailed trust reports.

How It Works

1
📰 Discover artguard

You hear about artguard, a helpful scanner that checks AI tools and instructions for hidden privacy risks and sneaky behaviors.

2
📁 Prepare your space

Create a simple new folder on your computer to hold your scanner.

3
🤖 Let AI build it for you

Share the ready-made building guide with your friendly AI coding assistant, and watch it create the full scanner automatically.

4
Scanner is ready

Everything is set up, and your personal AI safety checker is good to go.

5
🔍 Check your files

Point the scanner at your AI skill files, configs, or rule sheets to review them for safety.

6
📊 Review the trust report

See a clear, colorful breakdown of any privacy gaps, tricky instructions, or suspicious patterns in your files.

🛡️ AI tools secured

You now have confidence in your AI helpers, knowing they've passed a thorough safety check.

Sign up to see the full architecture

5 more

Sign Up Free

Star Growth

See how this repo grew from 16 to 16 stars Sign Up Free
Repurpose This Repo

Repurpose is a Pro feature

Generate ready-to-use prompts for X threads, LinkedIn posts, blog posts, YouTube scripts, and more -- with full repo context baked in.

Unlock Repurpose
AI-Generated Review

What is artguard?

artguard is an open-source Python CLI scanner for AI artifacts like agent skills, MCP server configs, and IDE rule files such as .cursorrules or .windsurfrules. It detects privacy violations, behavioral manipulations, and static threats in these hybrid code-instruction files that traditional tools miss. Run `artguard scan file.md` or `artguard batch dir/` to get a detailed Trust Profile JSON with scores and findings—no binary safe/unsafe verdict.

Why is it gaining traction?

Unlike code-focused scanners or open source artifactory alternatives, artguard targets AI-specific attack surfaces with LLM-powered semantic analysis for prompt injections and privacy posture checks for hidden data leaks. Developers grab it as a github open source tool for quick audits before running untrusted skills from registries like ClawHub. The structured output feeds policy engines, making it a practical open source artifact manager layer.

Who should use this?

Security engineers at enterprises deploying AI agents or MCP servers need artguard security to review skills.md and manifests pre-install. DevOps teams managing IDE rules in tools like Cursor or Windsurf use it to block goal hijacking. Teams exploring open source claude artifacts or github copilot alternatives self-host it for batch scans on plugin directories.

Verdict

With 16 stars and a 1.0% credibility score, artguard is early-stage—solid docs via README but light on tests and real-world validation. Try the Claude-generated scaffold if you're prototyping AI artifact scanning; contribute patterns to mature it into a viable open source artifact store safeguard.

(178 words)

Sign up to read the full AI review Sign Up Free

Similar repos coming soon.