transilienceai

Open-source static AI security scanner — prompt injection across 15 source types, broken LLM-as-judge detection, AI dependency SBOM. Beats Semgrep AI ruleset 2x on a labelled corpus.

16
2
100% credibility
Found Apr 22, 2026 at 16 stars -- GitGems finds repos before they trend. Get early access to the next one.
Sign Up Free
AI Analysis
Python
AI Summary

Whitney is an open-source security scanner that identifies prompt injection vulnerabilities in Python codebases and generates inventories of AI dependencies.

How It Works

1
🔍 Discover Whitney

You hear about Whitney, a friendly tool that checks your AI project for hidden security risks like sneaky instructions that could trick the AI.

2
💻 Set up Whitney

You bring Whitney onto your computer with a simple, quick step so it's ready to help.

3
📁 Choose your project

You select the folder containing your AI app's code, and Whitney gets to work examining it.

4
See the security check results

Whitney quickly shows a clear list of potential dangers, like risky ways user input reaches the AI, with easy explanations and fixes.

5
📋 Review and act

You look at the highlighted issues, understand why they matter, and make your app stronger.

6
🔧 Check AI building blocks

Optionally, Whitney lists all the AI parts your project uses, spotting any outdated or risky ones.

Your AI app is secure

With the risks fixed, your project feels safe and reliable, giving you peace of mind to build confidently.

Sign up to see the full architecture

5 more

Sign Up Free

Star Growth

See how this repo grew from 16 to 16 stars Sign Up Free
Repurpose This Repo

Repurpose is a Pro feature

Generate ready-to-use prompts for X threads, LinkedIn posts, blog posts, YouTube scripts, and more -- with full repo context baked in.

Unlock Repurpose
AI-Generated Review

What is whitney?

Whitney is a Python-based open-source static AI security scanner that detects prompt injection vulnerabilities across 15 source types like HTTP requests, RAG retrievals, web fetches, and agent handoffs in LLM apps. It wraps curated Semgrep rules to flag critical sinks such as LangChain SQL/PAL chains, outputs findings with CWE/OWASP tags in table or JSON via `whitney scan ./repo`, and generates an AI dependency SBOM with `whitney sbom ./repo`. By default, it runs with zero LLM API costs and reproducible results.

Why is it gaining traction?

It crushes commodity scanners like Semgrep's AI ruleset (2x better F1 on a labeled corpus, 100% recall/90% precision) by catching indirect injections others miss, without custom SAST engines. Opt-in LLM triage suppresses false positives on real defenses like Bedrock Guardrails, and the SBOM flags vulnerable SDKs like old LangChain. As an open source static code analysis tool for Python, it slots into GitHub Actions or CI like open source GitHub alternatives.

Who should use this?

Python devs building LangChain/CrewAI apps prone to prompt injection, security engineers auditing AI repos for OWASP LLM Top 10 risks, and teams needing quick AI SBOMs beyond standard open source static analyzers. Ideal for RAG/agents where web/email/DB fetches hit LLMs unfiltered.

Verdict

Grab it for Python AI security scans—benchmarks beat alternatives, CLI is dead simple, docs solid with repro evals. At 16 stars and 1.0% credibility, it's early beta (Python-only now), so test on your corpus before prod; roadmap adds JS/Go.

(187 words)

Sign up to read the full AI review Sign Up Free

Similar repos coming soon.