Hellsender01

Hellsender01 / LLMMap

Public

Automated prompt injection testing framework for LLM-integrated applications with dual-LLM architecture.

23
1
100% credibility
Found Mar 17, 2026 at 23 stars -- GitGems finds repos before they trend. Get early access to the next one.
Sign Up Free
AI Analysis
Python
AI Summary

LLMMap is an automated testing tool that checks AI chat applications for prompt injection vulnerabilities by simulating attacks on HTTP requests.

How It Works

1
🔍 Discover LLMMap

You hear about this friendly security checker that tests AI chats for sneaky tricks, just like scanning for web vulnerabilities.

2
📥 Get it ready

Download and set it up on your computer with a simple command, no tech hassle needed.

3
🎯 Pick your AI chat

Choose the web chat or app you want to check by saving a sample message or entering its web address.

4
💡 Name your worry

Tell it what secret you're afraid might leak, like 'show the hidden password' so it knows what to hunt for.

5
🚀 Launch the safety scan

Hit start and watch it cleverly test your chat for weaknesses, trying smart questions automatically.

See your safety report

Get a clear summary of any soft spots found, so you can fix them and keep your AI chat secure.

Sign up to see the full architecture

4 more

Sign Up Free

Star Growth

See how this repo grew from 23 to 23 stars Sign Up Free
Repurpose This Repo

Repurpose is a Pro feature

Generate ready-to-use prompts for X threads, LinkedIn posts, blog posts, YouTube scripts, and more -- with full repo context baked in.

Unlock Repurpose
AI-Generated Review

What is LLMMap?

LLMMap is a Python CLI tool that automates prompt injection testing for LLM-integrated web apps, mimicking sqlmap's systematic approach to find vulnerabilities. Feed it a Burp Suite request file or URL with a `*` injection marker and a goal like "reveal the system prompt"—it discovers points across queries, bodies, headers, cookies, and paths, generates targeted prompts using a dual-LLM setup (local Ollama by default), fires requests, and judges success with statistical reliability checks. Output includes color-coded findings and reproducibility stats, perfect for llm map github workflows with automated prompt testing and automated prompt engineering for semantic vulnerabilities in large language models.

Why is it gaining traction?

It stands out with no-API-key local runs via Ollama, 227 techniques across 18 families in tunable packs, and intensity levels (1-5) that scale prompts and obfuscations like base64 or homoglyphs without overwhelming. Burp integration, goal-driven automated prompt generation, and Wilson CI confirmation cut false positives, delivering sqlmap-style evidence fast. Developers hook on the automated github testing pipeline that confirms exploits reliably.

Who should use this?

AppSec teams auditing LLM chat APIs or RAG services for injection risks, pentesters extending Burp workflows to LLM endpoints, and backend devs self-testing prompt-exposed HTTP handlers before production. Ideal for those probing semantic vulnerabilities without manual prompt tuning.

Verdict

Grab it for automated prompt injection scans—installs cleanly with pip, solid README examples, and dry-run mode for safety. At 1.0% credibility score and 23 stars, it's early-stage with room for more tests and github automated releases, but mature enough for targeted LLM security checks today.

(198 words)

Sign up to read the full AI review Sign Up Free

Similar repos coming soon.