AndreevED

Skills for Claude Code: independent review of plans and code via external models

Found Mar 14, 2026 at 16 stars
AI Summary

This project provides skills for Claude Code that bring in an external model to iteratively review, verify, and automatically fix implementation plans and code changes.

How It Works

1
🔍 Discover the Review Helper

You learn about a handy tool that brings in a second AI opinion to check and improve your coding plans and changes.

2
📂 Add to Your Assistant

You copy a few simple files into your coding helper's folder, and it's ready to use.

3
Pick Your Review Type
📋
Plan Review

Get feedback on your step-by-step project outline.

💻
Code Review

Examine recent code changes and automatically fix issues.

4
🔄 Launch the Review Cycle

You give the command, and the two AIs collaborate in loops: one spots issues, the other verifies and fixes until everything's solid.

5
📊 Follow the Progress

You watch detailed logs showing each check, confirmation, fix, or smart dismissal of false alarms.

6
Celebrate Perfect Results

You receive a fully approved plan or polished code, complete with records, ready for your project.
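The cycle above can be sketched as a simple loop. This is a minimal illustration, not the project's actual implementation: `external_review`, `verified_against_codebase`, and `apply_fix` are toy stand-ins for the external critic (e.g. Codex) and the verifying assistant (Claude Code).

```python
def external_review(artifact):
    # Toy critic: flags any line containing "TODO" (stand-in for Codex remarks).
    return [line for line in artifact.splitlines() if "TODO" in line]

def verified_against_codebase(remark, artifact):
    # Toy verifier: confirm the remark still applies to the current artifact.
    return remark in artifact

def apply_fix(artifact, remark):
    # Toy fix: resolve the flagged line in place.
    return artifact.replace(remark, remark.replace("TODO", "DONE"))

def review_loop(artifact, max_iterations=5):
    """Iterate: external model critiques, primary model verifies and fixes,
    until no remarks remain (approved) or the iteration limit is reached."""
    log = []
    for i in range(1, max_iterations + 1):
        remarks = external_review(artifact)
        if not remarks:
            return {"status": "approved", "iterations": i, "log": log}
        for remark in remarks:
            if verified_against_codebase(remark, artifact):
                artifact = apply_fix(artifact, remark)
                log.append(("fixed", remark))
            else:
                log.append(("dismissed", remark))  # false positive
    return {"status": "limit_reached", "iterations": max_iterations, "log": log}

plan = "step 1: parse input\nstep 2: TODO handle errors\nstep 3: write output"
result = review_loop(plan)  # approved after one fix pass
```

The log of fixes and dismissals mirrors the detailed progress records the tool surfaces in step 5.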

AI-Generated Review

What is claude-code-external-review?

This project delivers Claude Code skills for independent review of implementation plans and code using an external model such as Codex CLI. It sends your artifacts to Codex for critique; Claude Code then verifies each remark against your actual codebase, fixes confirmed issues, and iterates until approval or an iteration limit, solving the problem of single-model blind spots in AI-assisted coding. Users get commands like `/codex-plan-review path/to/plan.md` or `/codex-code-review-fix `, with full logs for auditing, and it's free to install as a Claude Code plugin.

Why is it gaining traction?

It stands out with cross-model verification: Claude doesn't blindly trust Codex remarks but debates them with code evidence, catching false positives and over-engineering while accumulating context across iterations. Developers are drawn to the stalemate detection that flags model disagreements for manual intervention, plus the automatic context pulled from your Claude rules, making its reviews more reliable than solo Claude Code 2.0 or GitHub Copilot. The iterative fix loop with git diff awareness keeps reviews fresh without manual resets.
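The stalemate detection described above could work as a simple heuristic over the review log: a remark the external model keeps raising and the verifier keeps dismissing is a genuine disagreement, so a human breaks the tie. This is an illustrative sketch; `detect_stalemates` and the log shape are assumptions, not the project's actual API.

```python
from collections import Counter

def detect_stalemates(review_log, threshold=2):
    """Flag remarks dismissed repeatedly across iterations: neither model
    yields, so the disagreement is escalated for manual review.
    review_log is a list of (action, remark) tuples."""
    counts = Counter(
        remark for action, remark in review_log if action == "dismissed"
    )
    return [remark for remark, n in counts.items() if n >= threshold]

log = [
    ("dismissed", "function X lacks a null check"),  # iteration 1
    ("fixed", "typo in step 3"),
    ("dismissed", "function X lacks a null check"),  # iteration 2: raised again
]
stalled = detect_stalemates(log)  # the repeated remark is escalated
```

A threshold of two dismissals keeps one-off false positives out of the escalation queue while still surfacing persistent disagreements early.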

Who should use this?

1C developers using Claude Code CLI for feature workflows, especially those relying on agents like 1c-code-writer for fixes. Teams integrating Claude GitHub plugins or connectors for PR reviews, wanting a second AI opinion on plans before coding. Early adopters experimenting with Claude Code skills and external LLMs to harden code quality in custom stacks.

Verdict

Try it if you're deep in the Claude Code ecosystem: solid docs and an MIT license make installation straightforward, but 16 stars and no visible tests signal early maturity. Worth a spin for Claude GitHub integration fans; adapt the prompts for non-1C code to boost value.


