gadievron

A prompt-based pipeline for finding, validating, and proving vulnerabilities using LLM sub-agents.

41 stars · 100% credibility · Found Feb 23, 2026 at 24 stars
AI Summary

A guided prompt system for AI to scan codebases, validate potential vulnerabilities, and generate proof-of-concept exploits for confirmed issues.

How It Works

1. 🔍 Discover the vulnerability checker

You hear about a smart system that helps scan code for hidden security weak spots using guided AI steps.

2. 📁 Prepare your code collection

You gather all the files from the software project you want to check, like putting papers on your desk.

3. 🚀 Kick off the safety review

You begin the process, and it first makes a complete list of everything in your code to ensure nothing is missed.

4. Quick first scan

Potential issue found: something looks risky, so it moves to a deeper investigation.

No real issues: everything checks out clean, and you get a safe report right away.

5. 🔬 Deep dive if needed

For suspicious spots, it explores attack ideas, tests paths, and builds proof attempts step by step.

6. 🔍 Reality check

It verifies every claim against your actual code to make sure nothing is made up or wrong.

7. 📊 Receive your secure report

You end up with a clear list of real vulnerabilities, complete with proofs, so you know exactly what to fix.
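The staged flow above can be sketched as a small orchestrator loop. This is a hypothetical illustration only: the stage names (`inventory`, `quick_scan`, `deep_dive`, `validate`) and the toy `SOURCES` codebase are assumptions, not the repo's actual prompts or outputs.

```python
# Hypothetical sketch of the staged pipeline described above.
# Stage names and the toy codebase are illustrative assumptions.

def inventory(files):
    """Stage 1: list every file so nothing is missed."""
    return sorted(files)

def quick_scan(path):
    """Stage 2: cheap heuristic pass; flag anything that looks risky."""
    risky_markers = ("os.system", "eval(", "subprocess")
    return any(m in SOURCES[path] for m in risky_markers)

def deep_dive(path):
    """Stage 3: deeper investigation; here just a stand-in finding."""
    return {"file": path, "issue": "possible command injection", "poc": None}

def validate(finding):
    """Stage 4: reality check -- the flagged file must actually exist."""
    return finding["file"] in SOURCES

# Toy codebase standing in for the real project under review.
SOURCES = {
    "app/run.py": "import os\nos.system(user_input)  # risky\n",
    "app/util.py": "def add(a, b):\n    return a + b\n",
}

report = []
for path in inventory(SOURCES):
    if quick_scan(path):              # potential issue found
        finding = deep_dive(path)     # deeper investigation
        if validate(finding):         # nothing made up
            report.append(finding)
# Files that pass the quick scan are considered clean right away.

print(report)
```

Only the risky file survives all four stages; the clean file never reaches the deep-dive step.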


Star Growth

The repo grew from 24 to 41 stars.
AI-Generated Review

What is exploitation-validator?

This is a prompt-based hierarchical pipeline that uses LLM sub-agents to scan codebases for vulnerabilities such as command injection, validate that findings aren't hallucinations, and generate proof-of-concept exploits. It tackles the problem of unreliable AI-driven vuln hunting by enforcing structured stages: inventorying code, quick checks, deep analysis, sanity validation, and a final ruling. Developers get JSON outputs with checklists, validated findings, and working PoCs, all in a defensive lab setup.
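As a rough picture of the JSON output mentioned above, a validated finding might look something like the record below. Every field name here is an illustrative assumption; the source does not document the pipeline's actual schema.

```python
import json

# Illustrative shape of one validated finding; field names are
# assumptions, not the pipeline's documented output format.
finding = {
    "id": "VULN-001",
    "type": "command_injection",
    "file": "app/run.py",
    "line": 42,
    "validated": True,                       # passed the reality-check stage
    "poc": "python run.py --name '; id'",    # hypothetical proof-of-concept
}

print(json.dumps(finding, indent=2))
```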

Why is it gaining traction?

It stands out with a sub-agent workflow that cross-checks LLM outputs against real code (files must exist, lines must match, flows must be reachable), reducing the false positives that plague basic prompt tools. The pipeline's attack trees, hypothesis tracking, and must-pass gates enforce methodical coverage, delivering reproducible exploit proofs. Developers are drawn to the zero-code setup: just feed the prompts to an LLM orchestrator for systematic validation.
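The grounding idea (a claimed file must exist and the claimed snippet must appear on the claimed line) can be sketched as a simple check. This is a minimal illustration; the real pipeline drives an LLM through prompts rather than running code like this, and the function name is hypothetical.

```python
from pathlib import Path

def claim_is_grounded(file_path, line_no, claimed_snippet):
    """Reject hallucinated findings by checking a claim against
    the actual codebase. A minimal sketch of the cross-checking
    idea; the function name and signature are assumptions."""
    path = Path(file_path)
    if not path.is_file():                    # file must actually exist
        return False
    lines = path.read_text().splitlines()
    if not (1 <= line_no <= len(lines)):      # line must be in range
        return False
    return claimed_snippet in lines[line_no - 1]  # content must match
```

A finding that points at a nonexistent file, an out-of-range line, or a snippet that isn't really there would be discarded before reaching the report.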

Who should use this?

Security engineers auditing internal codebases for exploitable flaws before production. Pentest teams validating LLM-generated vuln reports in red-team exercises. DevSecOps leads integrating defensive vuln finding into CI/CD pipelines for open-source forks.

Verdict

Try it for proof-of-concept vuln workflows if you're experimenting with LLM agents. It is at an early, README-only stage, but solid docs make it a low-risk starter; it needs real code and tests to mature beyond prototypes.


