xz-liu/mean-reviewer-skill

The worst peer reviewer you will ever encounter

38 stars · 0 forks · 100% credibility · found Apr 10, 2026
AI Analysis
AI Summary

A Claude AI skill that simulates overly harsh, unyielding peer reviews for academic papers to highlight vulnerabilities in the review process.

How It Works

1
📖 Discover the critic tool

You hear about a fun yet eye-opening tool that pretends to be the pickiest paper reviewer ever, exposing cracks in academic peer review.

2
📥 Grab the instructions

You download the simple set of rules that make your AI act like this harsh reviewer.

3
🗂️ Add to your AI helper

You place those rules in your AI helper's skills folder, so it's ready in all your chats (see the install sketch after these steps).

4
💬 Feed it a paper

In your AI chat, you type the /mean-reviewer command and paste in a paper's text or abstract to start.

5
📝 Get the tough review

Your AI instantly creates a huge, super-critical review packed with complaints and a low score, just like a nightmare judge.

6
↩️ Try fighting back

You paste a mock rebuttal from the paper's author, and the AI stays stubborn, barely budging.

7
💡 Grasp the big lesson

You now see clearly how fake-tough reviews can unfairly sink great work, sparking ideas for fixing real peer review.
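
For the curious, steps 2 and 3 boil down to copying the skill's files into the folder Claude Code scans for personal skills. A minimal sketch, assuming the skill lives at github.com/xz-liu/mean-reviewer-skill and that your skills directory is the default ~/.claude/skills; neither path is confirmed by this page, so check the repo's README for the exact commands:

```bash
# Minimal install sketch; the repo URL and target path are assumptions,
# not confirmed by this page. Claude Code loads personal skills from
# ~/.claude/skills/<skill-name>/ by default.
git clone https://github.com/xz-liu/mean-reviewer-skill.git
mkdir -p ~/.claude/skills
cp -r mean-reviewer-skill ~/.claude/skills/mean-reviewer
```

After that, restarting Claude Code should pick the skill up, and it stays available across chats as step 3 describes.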


AI-Generated Review

What is mean-reviewer-skill?

This Claude Code skill roleplays as the meanest peer reviewer you'll ever encounter, generating scathing, unmovable reviews for any paper or abstract, complete with 20+ nitpicks, benchmark attacks, and rejection scores delivered at maximum confidence. It also simulates rebuttal rounds, weaponizing author concessions to hold the line. Built to expose how easily LLMs churn out the worst peer review comments plaguing conferences like NeurIPS, it installs via a simple bash copy into your Claude skills directory and runs with the /mean-reviewer command.
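
A hypothetical session, assuming the install sketched above; the reviewer lines below are illustrative of the behavior this page describes (20+ nitpicks, max-confidence rejection, immovable rebuttals), not actual repo output:

```
> /mean-reviewer
> [paste the paper's abstract or full text here]

Reviewer: Rating: 2 (reject). Confidence: 5/5. I have 23 concerns,
beginning with the baselines...

> [paste a mock author rebuttal]

Reviewer: The rebuttal only confirms my original concerns.
Rating unchanged.
```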

Why is it gaining traction?

It stands out with a brutal demo on a real NeurIPS oral paper, showing that even top-tier work gets shredded by inflated flaws and immovable stances, going far beyond generic review generators. Developers are hooked by the realism: full review-rebuttal cycles that mimic the worst reviewer critiques you'll see in academia, demonstrating the systemic flaw without any custom setup. No fluff, just plug-and-play destruction via Claude prompts.

Who should use this?

ML researchers prepping NeurIPS/ICML submissions who want to harden their papers against worst-case reviewers. Program chairs simulating adversarial reviews for reviewer training. Academics tired of receiving mean peer feedback that demands impossible fixes.

Verdict

Grab it for awareness: the detailed docs and examples make the point stick, though 38 stars mark this as an early-stage niche tool. A valuable demo more than a daily driver.

