prime-radiant-inc

Two skills for adversarial code review (single-model PAR and multi-model MMAR with cross-critique), plus a fixture-based eval suite.

15 stars · 100% credibility
Found May 15, 2026 at 15 stars
Python

AI Summary

A collection of skills and scripts for running parallel adversarial code reviews: multiple AI coding agents independently review code, cross-critique each other's findings, and synthesize a final report, with an evaluation suite to measure accuracy.
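To make the cross-critique step concrete, here is a minimal sketch in which each reviewer votes on every other reviewer's findings, and anything that fails to win over at least half of the other reviewers is dropped. The data shapes and the majority rule are illustrative assumptions, not the repo's actual logic.

```python
# Illustration only: findings are plain strings and critiques are boolean votes;
# the majority-vote rule is an assumed stand-in for the repo's actual logic.
def cross_critique(
    findings: dict[str, list[str]],
    confirms: dict[tuple[str, str, str], bool],
) -> list[str]:
    """Keep a finding only if most *other* reviewers confirm it.

    findings: reviewer name -> findings it reported
    confirms: (critic, author, finding) -> does the critic think it is real?
    """
    kept: list[str] = []
    for author, items in findings.items():
        critics = [r for r in findings if r != author]
        for finding in items:
            votes = sum(confirms.get((c, author, finding), False) for c in critics)
            if critics and votes * 2 >= len(critics):   # at least half confirm
                kept.append(finding)
    return kept

# Example: codex confirms claude's SQLi finding, gemini does not -> it survives (1 of 2).
print(cross_critique(
    {"claude": ["SQL injection in db.py:42"], "codex": [], "gemini": []},
    {("codex", "claude", "SQL injection in db.py:42"): True,
     ("gemini", "claude", "SQL injection in db.py:42"): False},
))
```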

How It Works

1
🔍 Discover the tool

You hear about a clever way to supercharge code reviews by teaming up multiple AI coding experts to check your work more reliably.

2
🤝 Gather your AI helpers

Connect the AI coding assistants you already use, like trusted friends who specialize in spotting code problems.

3
⚙️ Choose your review style

Pick between a quick team check for everyday code or a deeper debate for important security reviews.

4
🚀 Launch the review

Point the tool at your code file or folder, and watch the AI team dive in together with excitement.

5
💬 They debate and refine

The assistants review independently, challenge each other's ideas, and combine the best insights.

6
📊 Test its smarts

Run quick checks with sample problems to see how well it catches real issues without raising false alarms.

Get trustworthy results

Receive a clear, polished report of real problems in your code, giving you confidence to fix and ship safely.
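For the curious, a rough sketch of what combining the best insights into "a clear, polished report" could look like: findings from all reviewers are merged, deduplicated by location, and the worst reported severity wins. The `Finding` shape and merge rule are assumptions for illustration, not the repo's actual data model.

```python
# Illustrative sketch only: the Finding shape and severity ranking are assumptions,
# not the repo's actual data model.
from dataclasses import dataclass

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

@dataclass(frozen=True)
class Finding:
    file: str
    line: int
    severity: str
    summary: str

def synthesize(per_reviewer: dict[str, list[Finding]]) -> list[Finding]:
    """Merge all reviewers' findings, dedupe by location, keep the worst severity."""
    merged: dict[tuple[str, int], Finding] = {}
    for findings in per_reviewer.values():
        for f in findings:
            key = (f.file, f.line)
            kept = merged.get(key)
            if kept is None or SEVERITY_RANK[f.severity] > SEVERITY_RANK[kept.severity]:
                merged[key] = f
    # Worst problems first in the final report.
    return sorted(merged.values(), key=lambda f: SEVERITY_RANK[f.severity], reverse=True)
```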

AI-Generated Review

What is parallel-adversarial-review?

This Python tool delivers two skills for adversarial code review: single-model parallel runs where subagents compete on findings, and multi-model pipelines that run independent reviews, cross-critiques to catch hallucinations, and final synthesis into a deduplicated report. It solves flaky AI code reviews by aggregating outputs from CLIs like claude, codex, or gemini, prioritizing worst-severity disagreements. Developers get CLI commands like `python mmar.py review path/to/code --reviewers claude,codex` plus a fixture-based eval suite testing recall and precision on bugs like SQL injection or resource leaks.
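To make the fan-out stage concrete, here is a hedged sketch of collecting independent reviews from whatever coding-agent CLIs are installed. It assumes each CLI accepts a prompt on stdin and prints its review to stdout; the project's real adapters, prompts, and flags are configurable and will differ.

```python
# Sketch under assumptions: each reviewer CLI reads a prompt from stdin and
# writes its review to stdout. This is not the repo's actual adapter code.
import subprocess
from concurrent.futures import ThreadPoolExecutor

PROMPT = (
    "Review the code under {path}. List concrete bugs only, "
    "with file, line, severity, and a one-line summary."
)

def run_reviewer(cli: str, path: str) -> str:
    """Invoke one installed coding-agent CLI (e.g. claude, codex, gemini)."""
    result = subprocess.run(
        [cli],
        input=PROMPT.format(path=path),
        capture_output=True,
        text=True,
        timeout=600,
    )
    return result.stdout

def independent_reviews(path: str, reviewers: list[str]) -> dict[str, str]:
    """Run every reviewer in parallel so no reviewer sees another's findings."""
    with ThreadPoolExecutor(max_workers=len(reviewers)) as pool:
        futures = {name: pool.submit(run_reviewer, name, path) for name in reviewers}
        return {name: future.result() for name, future in futures.items()}
```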

Why is it gaining traction?

It stands out by wrapping any installed coding-agent CLI rather than requiring custom API integrations, using mock directories for cheap CI evals and a configurable TOML file for swapping reviewers in and out. The cross-critique grid drops false positives more reliably than a single model, with evals hitting 1.0 F1 on the bundled fixtures. Developers like the adversarial edge of reviewers actively trying to knock down each other's findings.
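The 1.0 F1 claim is easy to sanity-check conceptually. Below is a minimal sketch of fixture-style scoring that assumes findings and planted bugs are matched by (file, line) pairs; the actual suite's matching rules may be looser.

```python
# Sketch only: assumes findings are matched to planted bugs by (file, line) pairs.
def score(found: set[tuple[str, int]], expected: set[tuple[str, int]]) -> dict[str, float]:
    true_pos = len(found & expected)
    precision = true_pos / len(found) if found else 0.0
    recall = true_pos / len(expected) if expected else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Both planted bugs found, no false positives: precision = recall = f1 = 1.0
print(score({("db.py", 42), ("pool.py", 7)}, {("db.py", 42), ("pool.py", 7)}))
```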

Who should use this?

Security engineers auditing hot-path code for leaks or injections; backend teams reviewing diffs with multiple models before merging; open-source maintainers who need deterministic evals without live API costs. It is a natural fit for Python shops that already have the claude or gemini CLIs installed.

Verdict

Worth a spin for high-stakes reviews if you already run coding-agent CLIs: solid docs, 15 unit tests, and passing evals make it credible, even if the low score and star count signal early maturity. Fork and extend the adapters for your stack.
