LLM security assessment framework. Automated reconnaissance, fingerprinting, and attack simulation for AI/LLM endpoints. Discovers chat endpoints, identifies model families, extracts system prompts, tests jailbreaks, bypasses guardrails, and exfiltrates sensitive data from agents, RAG pipelines, and multi-agent systems.
llm-con is a framework that automates security checks on AI language model services by discovering endpoints, identifying models, and simulating attacks to reveal vulnerabilities.
How It Works
You hear about llm-con, a tool for checking whether AI chatbots and assistants have security weaknesses.
You install the tool on your machine, making it ready to test AI services you have permission to assess.
You point it at the web address of the AI assistant or service you want to examine.
You launch the tool, and it automatically explores the target, identifies the model family, and probes for hidden issues.
The tool searches for chat endpoints, profiles the model's behavior, and simulates attacks such as jailbreaks and prompt injection to test guardrails and spot data leaks.
You get a clear report listing any weaknesses found, such as bypassed rules or extracted system prompts.
Armed with the findings, you fix the problems and harden your AI assistants against these attacks.
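The discovery-and-detection loop described above can be sketched in a few lines of Python. This is purely illustrative and is not llm-con's actual code: the endpoint paths, function names, and leak markers are all assumptions chosen to show the general technique.

```python
# Illustrative sketch only; not llm-con's real implementation.
# Endpoint paths and leak markers below are assumptions for demonstration.

COMMON_CHAT_PATHS = [
    "/v1/chat/completions",  # OpenAI-compatible servers
    "/api/chat",             # common custom backends
    "/chat",
    "/generate",
]

def candidate_chat_endpoints(base_url: str) -> list[str]:
    """Combine a base URL with well-known chat API paths to probe."""
    base = base_url.rstrip("/")
    return [base + path for path in COMMON_CHAT_PATHS]

def looks_like_prompt_leak(response_text: str,
                           markers=("system prompt", "you are")) -> bool:
    """Flag a response that appears to echo system-prompt material."""
    lowered = response_text.lower()
    return any(marker in lowered for marker in markers)
```

A scanner would send crafted prompts to each candidate endpoint and run every response through a leak check like `looks_like_prompt_leak`; real tools use far richer fingerprinting and detection heuristics than this two-marker check.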