EresusSecurity

AI/LLM security scanner — model artifact analysis, prompt injection firewall, MCP agent validation, pickle/safetensors/GGUF fuzzing. Zero false positives.

11
2
89% credibility
Found May 06, 2026 at 11 stars -- GitGems finds repos before they trend. Get early access to the next one.
Sign Up Free
AI Analysis
Python
AI Summary

Eresus Sentinel is an alpha-stage security toolkit that scans AI models, prompts, agents, and supply chains using deterministic YAML rules without requiring AI assistance.

How It Works

1
👀 Discover secure AI scanning

You hear about a simple tool that checks AI models and projects for hidden dangers without needing fancy setups.

2
📦 Get it ready quickly

Follow easy steps to add the scanner to your computer, like installing a helpful app.

3
🔍 Scan your AI project

Point it at your models or code folders and let it hunt for security risks automatically.

4
📊 See clear results

Get a simple report showing what it found, like a list of potential problems with explanations.

5
Fix issues or connect to your work
🔧
Quick fixes

Follow tips to clean up risks in your project.

🔗
Ongoing protection

Set it up to watch your work automatically.

🛡️ Your AI is safer now

Rest easy knowing your models and projects are protected from common AI threats.

Sign up to see the full architecture

4 more

Sign Up Free

Star Growth

See how this repo grew from 11 to 11 stars Sign Up Free
Repurpose This Repo

Repurpose is a Pro feature

Generate ready-to-use prompts for X threads, LinkedIn posts, blog posts, YouTube scripts, and more -- with full repo context baked in.

Unlock Repurpose
AI-Generated Review

What is Eresus-sentinel?

Eresus-sentinel is a Python-based security scanner for AI/LLM projects on GitHub, tackling risks like prompt injections, unsafe model artifacts (pickle, safetensors, GGUF fuzzing), and MCP agent validation. It delivers deterministic scans with zero false positives via YAML rules, outputting SARIF for GitHub CI, plus a CLI (`sentinel scan`, `sentinel firewall`), REST API, and React dashboard for ai llm security testing. Users get instant audits across the AI stack without relying on AI judges.

Why is it gaining traction?

It stands out with deterministic-first scanning—no AI dependency for core findings—making it fast and reliable for CI pipelines, unlike probabilistic tools. The GitHub Action, pre-commit hooks, and MCP proxy for runtime enforcement hook devs building ai agent llm GitHub projects, while SARIF integration lights up security tabs. Zero false positives and full-stack coverage (artifacts to red team probes) make it a practical pick over generic SAST.

Who should use this?

AI/ML engineers securing generative ai llm GitHub repos, teams auditing ai llm projects for supply chain risks or MCP agents, and DevSecOps folks adding ai llm security assessment to PRs. Ideal for ai llm security jobs involving model downloads from HuggingFace or OpenAI llm GitHub integrations.

Verdict

Worth evaluating for ai llm security tools in early pipelines—strong CLI/docs and Docker support despite alpha status, 11 stars, and 0.9% credibility score. Maturity lags (experimental features), so test in non-prod first.

(187 words)

Sign up to read the full AI review Sign Up Free

Similar repos coming soon.