dlowd

Claude Code skill for adversarial review of plans, design docs, and code

100% credibility
Found Mar 29, 2026 at 11 stars
AI Analysis
AI Summary

This project is a review skill for an AI coding assistant that examines code, designs, or projects to find and rank issues by severity, like a critical lawyer spotting contract flaws.

How It Works

1
💡 Discover the critique tool

While working on your project in Claude Code, you learn about the critique skill, a review tool that acts like a tough, adversarial editor.

2
📥 Grab the tool

You clone the repository for this simple, single-file review skill, built specifically for Claude Code.

3
Add it to your AI workspace

Place the skill in your Claude Code skills directory, and it's ready to invoke.
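
A minimal install sketch of steps 2 and 3. The skills directory path (`~/.claude/skills`) and the `SKILL.md` filename are assumptions based on common Claude Code conventions; the real clone is simulated here with a local directory, since the repo URL isn't given on this page:

```shell
# In practice you'd start with:
#   git clone <repo-url>
# Here we simulate the cloned skill with a local directory instead:
mkdir -p claude-skill-critique
echo "skill definition goes here" > claude-skill-critique/SKILL.md

# Install: copy the skill into Claude Code's skills directory
# (~/.claude/skills is an assumed location; check the Claude Code docs)
SKILLS_DIR="${SKILLS_DIR:-$HOME/.claude/skills}"
mkdir -p "$SKILLS_DIR"
cp -r claude-skill-critique "$SKILLS_DIR/critique"
```

After this, restarting your Claude Code session should make the skill available.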

4
🔍 Pick what to review

Decide whether you want to check your latest work, a specific file, a specific topic, or recent changes.

5
Tell the AI to critique
🚀
Auto-review recent work

Let it automatically find and check your most recent project updates.

📄
Review a specific file

Point it to one file you want a deep check on.

🔎
Check a topic or recent changes

Ask it to scan your project for a specific topic or for today's changes.
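
The review below notes that these targets map onto a /critique command in the Claude Code CLI; the file path here is a hypothetical example, and the topic and date phrasings are taken from that review:

```
/critique                        # auto-review recent work
/critique src/encoder.py         # deep check of one file (hypothetical path)
/critique use of ffmpeg          # scan for a topic
/critique commits from today     # check recent changes
```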

6
📊 Receive the detailed report

Your AI delivers a clear list of issues, ranked from showstoppers down to nice-to-have suggestions, with the exact locations cited.

7
🎉 Improve your project

Use the honest feedback to fix flaws, fill gaps, and make your work stronger and more reliable.

AI-Generated Review

What is claude-skill-critique?

This Claude Code skill turns Anthropic's Claude into an adversarial reviewer for your plans, design docs, code, or recent commits. Invoke it via /critique in the Claude Code CLI—target a file, topic like "use of ffmpeg," or "commits from today"—and it delivers findings ranked by severity, from showstoppers like security flaws to minor suggestions. It solves the blind spots in self-review by being brutally specific, citing exact lines without rewriting your work.

Why is it gaining traction?

Unlike generic Claude Code agents, it enforces honest severity levels and demands concrete mechanisms for speculative issues, cutting down on vague feedback. The structured output (showstoppers, gaps, inconsistencies) builds trust fast, and auto-detection of recent work fits Claude Code workflows seamlessly. An easy git-clone install and forkable customization appeal to devs already using Claude's GitHub integration for code reviews.

Who should use this?

Backend engineers auditing architectural choices before merge. Indie devs validating solo prototypes against edge cases. Teams integrating Claude Code skills into GitHub Actions for automated PR critiques.
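
A hypothetical sketch of that GitHub Actions integration. The action reference, its inputs, and the prompt wiring are all assumptions for illustration, not part of this repo:

```yaml
# Hypothetical workflow: run an adversarial critique on each pull request.
# The action name and `with:` inputs below are illustrative assumptions;
# consult the action's own documentation for the real interface.
name: PR Critique
on: [pull_request]
jobs:
  critique:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1   # assumed action reference
        with:
          prompt: "/critique commits from today"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```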

Verdict

At 11 stars and a 100% credibility score, it's raw and single-file simple: the docs are solid, but expect tweaks for production. Grab it free if you're deep in the Claude Code CLI; skip it if you need battle-tested Claude Code agents.

