pruiz / CodeCome (Public)

An agentic vulnerability research harness for everybody.

17 stars · 0 forks · 69% credibility

Found May 13, 2026 at 17 stars by GitGems.
Language: Python
AI Summary

CodeCome is an open-source workflow that guides AI agents through auditing source code for vulnerabilities, generating structured findings, sandbox validations, exploit proofs, and reports as plain files.

How It Works

1
👀 Discover CodeCome

You hear about a helpful tool that lets AI safely check your code for hidden problems, turning hunches into solid proof.

2
📁 Add your code

Drop the folder with your project's files into the harness's src/ directory, like preparing ingredients for a recipe.
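In practice, this step is just staging your code inside the harness tree. A minimal sketch, assuming only that src/ is the drop point (the review names src/; the project name and files here are placeholders):

```shell
# Stage a project for auditing by copying it into the harness's src/ directory.
# "myproject" is a placeholder; only src/ as the drop point comes from the review.
mkdir -p myproject && touch myproject/app.py   # stand-in project for the demo
mkdir -p src
cp -r myproject src/
ls src/
```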

3
🔍 AI scouts the project

The smart helper reads everything, sketches a safe testing playground tailored just for your code, and notes key spots to watch.

4
💡 Spot possible issues

AI suggests specific weak points with details on why they might break, like a detective listing suspects.
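The AI review further down notes that findings are written as structured Markdown files with IDs like CC-0001. A candidate finding from this step might look roughly like the sketch below; every field name and the vulnerability described are invented for illustration, not the harness's actual schema:

```markdown
# CC-0001: Possible command injection in build helper (hypothetical example)

- Status: candidate          <!-- all field names are illustrative guesses -->
- Location: src/myproject/build.py
- Hypothesis: user-supplied branch name reaches a shell invocation unsanitized
- Why it might break: no validation between the CLI argument and subprocess call
- Next step: validate in the sandbox (phase 4)
```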

5
🧐 Review the clues

Test it out

Move promising ones to a safe area for proof.

Dismiss weak ones

Set aside ideas that don't hold up on second look.

6
🛡️ Prove in safe space

In a protected playground, you run tests and capture real evidence, confirming bugs or ruling them out for good.
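The review mentions per-language Docker sandboxes for this phase. A minimal isolation recipe could look like the following sketch; the image name, mount layout, and PoC path are assumptions, and the command is written to a script for review rather than executed directly:

```shell
IMG=python:3.12-slim          # hypothetical per-language base image
POC=findings/CC-0001/poc.py   # hypothetical PoC path inside the repo
# Write the sandbox command to a script instead of running it, so it can be
# reviewed first. --network none and a read-only mount keep the PoC contained.
cat > run_sandbox.sh <<EOF
docker run --rm --network none --read-only -v "\$PWD/$POC:/audit/poc.py:ro" $IMG python /audit/poc.py
EOF
cat run_sandbox.sh
```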

7
📊 Get your security report

Enjoy a clear summary with proofs, notes, and fix ideas, all in easy files you can share or save forever.


AI-Generated Review

What is CodeCome?

CodeCome is a Python-based harness for agentic AI vulnerability research, letting AI agents audit source code through a six-phase workflow: reconnaissance, hypothesis generation, counter-analysis, validation in Docker sandboxes, exploit development, and reporting. Drop a source tree under src/, tweak codecome.yml for scope and focus, then run make targets like phase-2 for candidate findings or phase-4 FINDING=CC-0001 for sandboxed proof. It outputs structured Markdown findings with evidence artifacts, turning vague "maybe a bug" hunches into reviewable PoCs without databases or magic.
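The flow above might be wired up roughly as follows. Only src/, codecome.yml, the make targets, and the CC-0001 finding ID come from this review; every YAML key name in the sketch is a guess, not a documented schema:

```shell
# Sketch of kicking off a CodeCome-style audit. The YAML key names below are
# illustrative assumptions; only the filename codecome.yml is from the review.
cat > codecome.yml <<'EOF'
# audit scope and focus (key names are illustrative, not a documented schema)
scope:
  include:
    - src/
focus:
  - input-validation
  - deserialization
EOF

# Per the review, the phases are then driven by make targets:
#   make phase-2                  # generate candidate findings
#   make phase-4 FINDING=CC-0001  # validate one finding in a Docker sandbox
cat codecome.yml
```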

Why is it gaining traction?

Unlike black-box agentic LLM vulnerability scanners, CodeCome enforces auditable phases with file-based artifacts you can grep, commit, or diff, plus per-language Docker sandboxes for safe validation. Developers like the Make-driven agentic workflow that mixes models per phase, runs deep sweeps on high-risk files, and uses counter-analysis to kill weak hypotheses early. It's a methodology-first take on agentic vulnerability detection and remediation, built on OpenCode for the agentic coding itself.

Who should use this?

Solo security researchers auditing open-source projects or internal repos who want AI help without opaque chats. Blue and red teamers needing commit-ready vuln reports from source reviews. Folks experimenting with agentic AI vulnerability management on Python, C++, Node, or Java stacks.

Verdict

Promising early PoC for agentic vulnerability research (17 stars), but the low credibility score (roughly 0.70) reflects thin adoption and no CI/tests. Clone it and walk one finding end-to-end before betting on it. Grab it if you're into structured agentic GitHub workflows; otherwise, wait for polish.

