lulbitz

lulbitz / llm-con

Public

LLM security assessment framework. Automated reconnaissance, fingerprinting, and attack simulation for AI/LLM endpoints. Discovers chat endpoints, identifies model families, extracts system prompts, tests jailbreaks, bypasses guardrails, and exfiltrates sensitive data from agents, RAG pipelines, and multi-agent systems.

11
1
69% credibility
Found Apr 10, 2026 at 11 stars -- GitGems finds repos before they trend. Get early access to the next one.
Sign Up Free
AI Analysis
AI Summary

llm-con is a framework that automates security checks on AI language model services by discovering endpoints, identifying models, and simulating attacks to reveal vulnerabilities.

How It Works

1
🔍 Discover llm-con

You hear about llm-con, a helpful tool for checking if AI chatbots and assistants have security weak spots.

2
💻 Prepare the tool

You set up the tool on your computer, making it ready to test AI services you have permission to check.

3
🎯 Pick your target

You choose the web address of the AI assistant or service you want to examine for safety.

4
🚀 Run the security check

You launch the tool and it automatically explores the AI, learns its type, and tries clever ways to uncover hidden issues.

5
🔎 Follow the journey

The tool searches for entry points, profiles the AI's behavior, and simulates tricks to test protections and spot leaks.

6
📋 Review discoveries

You get a clear report listing any weaknesses found, like bypassed rules or extracted sensitive info.

🛡️ Improve AI safety

Armed with the findings, you fix the problems and make your AI assistants stronger against tricks.

Sign up to see the full architecture

5 more

Sign Up Free

Star Growth

See how this repo grew from 11 to 11 stars Sign Up Free
Repurpose This Repo

Repurpose is a Pro feature

Generate ready-to-use prompts for X threads, LinkedIn posts, blog posts, YouTube scripts, and more -- with full repo context baked in.

Unlock Repurpose
AI-Generated Review

What is llm-con?

llm-con is a Python CLI framework for automated security assessment of LLM endpoints, chaining recon, model fingerprinting, jailbreaks, guardrail bypasses, and data exfiltration from chatbots, RAG pipelines, and agents. It discovers endpoints, IDs families like GPT or Llama via knowledge cutoff probes, extracts system prompts with 55 techniques, and simulates attacks like context injection or function abuse. Users get zero-step scans from a single URL, with options for stealthy probes, batch mode, and JSON reports.

Why is it gaining traction?

It automates the full pipeline—recon over 1200 paths, fingerprinting RAG docs and agent tools, then targeted attacks with ML classifiers for llm confidence scores and payload mutation—unlike scattered GitHub LLM projects or manual jailbreak repos. Evasion features like jitter, proxies, and random agents make it practical for real audits, while attack levels and OS shell hooks appeal to testers bypassing llm constitutional law limits. The optional ML retraining boosts accuracy on refusals and leaks.

Who should use this?

Red teamers running llm security audits on production APIs, pen testers probing agent systems for SSRF or RCE, CTF players targeting LLM endpoints. Suited for security engineers handling llm context engineering flaws, context window exploits, or multi-agent pivots in authorized tests.

Verdict

With 11 stars, a 0.7% credibility score, and active dev toward public release, it's raw but functional for quick llm security framework trials—docs are solid via CLI flags. Grab it from this llm GitHub repository if consent is locked in; skip for mission-critical unless you patch gaps yourself.

(198 words)

Sign up to read the full AI review Sign Up Free

Similar repos coming soon.