Bambushu / crucible

Codebase-level adversarial review by a panel of frontier models. A Claude Code skill that runs every file through DeepSeek + Gemini + Kimi + MiniMax in sequence, then has Claude verify the findings against the actual source.

100% credibility
Found May 13, 2026 at 50 stars.
Language: Python

AI Summary

Crucible is a skill for an AI coding assistant that performs in-depth code reviews using a panel of diverse AI models, verifies findings, and generates a severity-ranked report.

How It Works

1. 👀 Discover Crucible

You hear about Crucible, a smart helper that checks your code for problems using a team of expert reviewers inside your AI coding assistant.

2. 📥 Add the helper

Download the folder and place it in your AI assistant's special skills area, like adding a new tool to its toolbox.

3. 🔗 Connect expert reviewers

Link a service that provides a panel of clever thinking helpers so they can examine your code deeply.

4. 🔄 Refresh your assistant

Restart your AI coding session, and the new skill appears ready in the list of commands.

5. Start a review

🔄 Quick check on recent changes: review just the latest updates you made to spot issues fast.

📂 Full project scan: examine the entire codebase, or specific parts of it, for a thorough look.
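
The two review modes above correspond to slash-command invocations of the shapes described on this page; a hypothetical session might look like the following (exact arguments may differ from the skill's actual syntax):

```
/crucible                    # quick check: review recent changes (diff)
/crucible src/api/**/*.ts    # full scan of a specific glob
/crucible HEAD~3..HEAD       # review a git range
```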

6. 👀 Preview the plan

See a summary of files to check, time needed, and rough cost, then give the okay to begin.

7. Watch it work

Your assistant runs the review with multiple experts checking each piece, verifying findings, and building a clear report.
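
The panel-then-verify flow described above can be sketched in a few lines of Python. Stub functions stand in for the real DeepSeek/Gemini/Kimi/MiniMax calls made via OpenRouter, and every name below is illustrative, not the project's actual API:

```python
# Minimal sketch, assuming findings are pooled from several
# reviewers and then checked against the actual source.

def panel_review(source, reviewers):
    """Run every reviewer over the source and pool their findings."""
    findings = []
    for name, review in reviewers:
        for issue in review(source):
            findings.append({"reviewer": name, "issue": issue})
    return findings

def verify(source, findings):
    """Final pass: keep only findings that cite code actually present
    in the source (a crude stand-in for Claude's verification)."""
    return [f for f in findings if f["issue"]["line"] in source]

# Stub "models": each flags a different kind of problem.
def stub_a(src):
    return [{"line": l, "note": "possible bug"} for l in src if "eval(" in l]

def stub_b(src):
    return [{"line": l, "note": "bare except"} for l in src if l.strip() == "except:"]

code = ["result = eval(user_input)", "try:", "except:", "    pass"]
pooled = panel_review(code, [("model-a", stub_a), ("model-b", stub_b)])
confirmed = verify(code, pooled)
```

The verification step is what lets the tool flag false positives: any pooled finding that fails the check against the real source would be dropped or disputed rather than reported.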

8. 📄 Receive your report

Get a polished document ranking issues by importance, with explanations and suggested fixes, so you can improve your code confidently.


AI-Generated Review

What is crucible?

Crucible is a Claude Code slash command that performs codebase-level adversarial code reviews by routing every file through a panel of frontier models like DeepSeek, Gemini, Kimi, and MiniMax via OpenRouter, then has Claude verify findings against the actual source. Run `/crucible` on diffs, whole repos, globs like `src/api/**/*.ts`, or git ranges from inside Claude Code—it spits out a severity-ranked `report.md` with confirmed issues, false positives flagged, and missed ones added. Built in Python, it handles interruptions with resume and costs $0.10-$0.75 for a typical PR.

Why is it gaining traction?

Unlike single-model tools like GitHub Copilot reviews or Claude personas, Crucible pits structurally different models against each other—sequential chaining or blind consensus—to catch blind spots, with Claude's final pass disputing hallucinations. Deployment context flags cut irrelevant findings (e.g., no multi-worker warnings for desktop apps), and CI wrappers make it pipeline-ready. Devs dig the trustworthy reports that save manual re-reading.
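
The two panel strategies named above can be contrasted with a short sketch. Stub reviewers again stand in for real model calls; all names are illustrative:

```python
# Sequential chaining: each reviewer sees what earlier ones found.
# Blind consensus: reviewers work independently and findings are voted on.
from collections import Counter

def sequential_chain(source, reviewers):
    """Later models can confirm or extend earlier findings."""
    findings = []
    for review in reviewers:
        findings = review(source, findings)
    return sorted(set(findings))

def blind_consensus(source, reviewers, quorum=2):
    """A finding survives only if at least `quorum` reviewers report it."""
    votes = Counter()
    for review in reviewers:
        votes.update(set(review(source, [])))
    return sorted(f for f, n in votes.items() if n >= quorum)

# Stub "models" report line numbers of suspicious lines.
def model_a(src, prior):
    return prior + [i for i, line in enumerate(src) if "TODO" in line]

def model_b(src, prior):
    return prior + [i for i, line in enumerate(src)
                    if "TODO" in line or "FIXME" in line]

code = ["x = 1", "# TODO: validate input", "# FIXME: resource leak"]
```

Run over `code`, chaining keeps both flagged lines while a quorum of 2 keeps only the line both stubs agree on; that is the trade-off in miniature: chaining maximizes recall, consensus trims noise.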

Who should use this?

Backend teams auditing PRs before merge, solo devs deep-diving feature branches, or security-conscious leads running pre-deploy checks on globs like auth files. Ideal for Python/TS/Go repos where you want multi-model scrutiny without switching tools—skip if you're just glancing at one file.

Verdict

Try it for serious audits; 50 stars and 1.0% credibility score signal early days with thin adoption, but polished docs and MIT license lower the risk. Pairs well with Claude Code workflows, though expect tweaks as models evolve.

