AlexWortega

Claude Code skill for multi-reviewer peer review of academic papers. Adapted from poldrack/ai-peer-review — uses parallel Claude subagents instead of multiple proprietary LLMs.

Found May 09, 2026 at 19 stars.
AI Summary

A skill for the Claude Code AI assistant that processes academic paper PDFs to generate multiple independent reviews, a synthesized meta-review, and a concerns table in CSV.

How It Works

1. 📚 Discover the paper review helper

You hear about a handy AI skill that gives quick, expert-like feedback on academic papers, saving time on peer reviews.

2. 🛠️ Add it to your AI chat

Download the skill and link it into your Claude Code assistant's skills folder, then restart to make it ready.
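In practice, "linking it in" usually means a symlink into Claude Code's skills folder. A minimal sketch, assuming you have already downloaded the skill into the current directory and that your skills folder is at the default `~/.claude/skills` location (adjust the paths if your setup differs):

```shell
# Assumed layout: the skill has already been cloned/downloaded to ./ai-peer-review-skill
SKILL_SRC="$PWD/ai-peer-review-skill"
SKILLS_DIR="${HOME}/.claude/skills"   # default Claude Code skills folder (assumption)

mkdir -p "$SKILLS_DIR"
# -sfn: symbolic link, overwrite an existing one, don't follow it if it's a directory link
ln -sfn "$SKILL_SRC" "$SKILLS_DIR/ai-peer-review-skill"
ls -l "$SKILLS_DIR"   # verify the link, then restart Claude Code
```

A symlink (rather than a copy) means a later `git pull` in the source directory updates the installed skill automatically.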

3. 📎 Drop in your paper

In your AI chat, say 'Peer-review this paper: mypaper.pdf' and optionally mention the field like neuroscience.

4. 🔍 Watch reviewers analyze

Your AI spins up several independent reviewers who each read the paper deeply and note strengths, concerns, and verdicts.
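Conceptually this is a fan-out/fan-in pattern: several reviewers read the same paper independently, then their notes are combined into one synthesis. A minimal illustrative sketch of that pattern in Python (the real skill delegates this to parallel Claude subagents, not to a local function; names like `peer_review` and `num_reviewers` here are illustrative, though the README does mention a `num_reviewers` option):

```python
import asyncio

async def review(paper: str, reviewer_id: int) -> str:
    # Stand-in for one independent model call that reads the whole paper
    await asyncio.sleep(0)
    return f"reviewer_{reviewer_id}: notes on {paper}"

async def peer_review(paper: str, num_reviewers: int = 3) -> dict:
    # Fan out: all reviewers run concurrently and never see each other's notes
    reviews = await asyncio.gather(
        *(review(paper, i) for i in range(1, num_reviewers + 1))
    )
    # Fan in: a synthesis step combines the independent reviews
    meta = f"meta-review synthesizing {len(reviews)} independent reviews"
    return {"reviews": list(reviews), "meta": meta}

result = asyncio.run(peer_review("mypaper.pdf"))
print(len(result["reviews"]))  # 3
```

Keeping the reviewers isolated until the synthesis step is what makes the feedback genuinely independent rather than an echo of the first reviewer's take.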

5. 📋 Get all the feedback

Receive individual review files, a smart summary combining their insights, and a clear table showing every concern raised.

6. ✅ Strengthen your paper

Use the thorough, balanced feedback to improve your research and feel confident submitting it.

AI-Generated Review

What is ai-peer-review-skill?

This Python-based Claude Code skill automates peer review for academic papers: drop in a PDF, DOCX, or text file, and it spawns multiple parallel Claude subagents to generate independent structured reviews—complete with summaries, concerns, and verdicts—plus a synthesized meta-review and a CSV concerns matrix. Outputs land in organized folders as Markdown files, CSV, and JSON, ready for analysis. It's an adapted take on an existing tool, swapping multi-LLM calls for Claude-only agents to simplify academic paper evaluation.
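Because the concerns matrix is plain CSV, it is easy to post-process, for example to find concerns raised by more than one reviewer. A small sketch under assumed column names (`reviewer`, `severity`, `concern`); check the header of the file the skill actually writes:

```python
import csv
import io
from collections import Counter

# Hypothetical contents of the skill's concerns CSV (columns are assumptions)
concerns_csv = """\
reviewer,severity,concern
reviewer_1,major,No ablation study
reviewer_2,minor,Unclear notation in Section 3
reviewer_3,major,No ablation study
"""

rows = list(csv.DictReader(io.StringIO(concerns_csv)))

# Count how many reviewers raised each concern (cross-reviewer agreement)
agreement = Counter(row["concern"] for row in rows)
majors = [row for row in rows if row["severity"] == "major"]

print(agreement.most_common(1))  # [('No ablation study', 2)]
print(len(majors))               # 2
```

Concerns flagged by multiple independent reviewers are usually the ones worth fixing first.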

Why is it gaining traction?

It requires no API keys beyond an existing Claude Code setup (free install, no extra pricing to manage) and plugs into the Claude Code CLI or desktop app via a simple symlink into the skills folder. Developers like the parallel reviews, the optional hard-nosed alignment critic, and the arXiv search integration for grounded novelty checks, which together deliver diverse feedback without juggling multiple models. Being a plain GitHub repo makes it a quick add for academic workflows.

Who should use this?

Academic researchers prepping arXiv submissions, ML authors seeking rapid multi-angle critiques on novelty and rigor, or journal reviewers needing structured breakdowns. Ideal for anyone in neuroscience, RL, or alignment forums tired of solo reads—say "Peer-review this paper: path/to/manuscript.pdf" with tweaks like num_reviewers or domain.

Verdict

Grab it if you're deep in academic paper cycles and already on Claude Code: solid docs and an easy install make the modest 19-star count forgivable for an early adapted project. Skip it for production use unless you verify the outputs; its maturity shows in light usage so far, but it nails the niche.


