c-narcissus

Build custom OpenReview-grounded review skills for research areas.

17 stars · 1 fork · 100% credibility
Found May 10, 2026 at 17 stars.
AI Summary

This repository is a meta-toolkit for generating specialized AI assistants that review academic papers in targeted research areas, grounding each generated reviewer in public OpenReview review evidence or in contrasts between sample papers.

How It Works

1. 🕵️ Discover the tool

You come across a handy kit that helps create custom experts for reviewing research papers in any specific field.

2. 📥 Grab the package

Download the simple zip file containing the factory kit, ready for your AI workspace.

3. 🆕 Start a new project

Create a fresh space in your AI chat tool where you can build and use smart helpers.

4. 📤 Add the kit

Upload the zip file to your project so the factory is all set up and waiting.

5. 💬 Request your reviewer

Chat naturally about your research topic, like graph networks, and ask it to build a specialized paper reviewer just for that area.

6. Handle extra needs

- 📚 Public reviews ready: it pulls from shared paper feedback automatically to shape the reviewer's taste.
- 📁 Share paper samples: you add folders of top-quality and everyday papers to help model expert judgments (a folder-layout sketch follows this list).
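
The two-folder contrast is easy to picture. Here is a minimal sketch, assuming hypothetical samples/high_quality/ and samples/general/ directories of PDFs; the kit's actual expected layout may differ:

```python
from pathlib import Path

# Hypothetical layout the factory might consume:
#   samples/high_quality/  <- exemplary papers (PDFs)
#   samples/general/       <- everyday papers (PDFs)
SAMPLES = Path("samples")

def collect_samples(root: Path = SAMPLES) -> dict[str, list[Path]]:
    """Gather sample papers by tier for the contrast step."""
    return {
        "high_quality": sorted((root / "high_quality").glob("*.pdf")),
        "general": sorted((root / "general").glob("*.pdf")),
    }

if __name__ == "__main__":
    for tier, papers in collect_samples().items():
        print(f"{tier}: {len(papers)} papers")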

🎉 Expert reviewer ready

Your custom paper reviewer is created, packed neatly, and ready to deeply analyze new papers with smart checks and advice.

AI-Generated Review

What is research-review-skill-factory?

This project lets you build custom reviewer skills for specific research areas, like a factory for tailoring AI-assisted paper critiques to niches such as federated learning or graph neural networks. Upload its ZIP to a Codex project and use natural-language prompts to generate child skills grounded in OpenReview public reviews or in contrasts between high-quality and general paper samples, complete with runtime literature searches and full-text reading checks. It solves the pain of generic review templates by producing reusable, evidence-based skills that audit novelty, baselines, and subtle logic flaws.
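
To ground the OpenReview side in something concrete, here is a minimal sketch of fetching public submissions for one venue. It assumes the public API v2 notes endpoint and its content.venueid query filter; the factory's own retrieval pipeline is not shown in this summary and may differ:

```python
import requests

# Minimal sketch: fetch public submissions for one venue from OpenReview.
# Assumes the public API v2 `notes` endpoint and its `content.venueid`
# query filter; endpoint and parameter names may differ in practice.
API = "https://api2.openreview.net/notes"

def fetch_notes(venue_id: str, limit: int = 25) -> list[dict]:
    resp = requests.get(
        API,
        params={"content.venueid": venue_id, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("notes", [])

if __name__ == "__main__":
    notes = fetch_notes("ICLR.cc/2024/Conference")
    print(f"Fetched {len(notes)} public submissions")
```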

Why is it gaining traction?

Unlike basic scrapers, it enforces strict evidence gates: a new skill needs 20+ recent OpenReview papers in its area before the factory falls back to anonymized sample contrasts, which keeps its "reviewer taste" profiles reliable without leaking private data. Developers like the privacy-safe packaging, dynamic top-conference literature retrieval, and mandatory standalone logic audits; the result feels like building a custom PC optimized for your subfield's quirks. Natural-language invocation in Codex hooks experimenters who want auditable, upgradeable review pipelines.
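
The gate itself reduces to a simple threshold check. A minimal sketch under the assumption that both evidence sources are already collected; all names here are hypothetical, not the repo's actual API:

```python
MIN_OPENREVIEW_PAPERS = 20  # threshold described in the review above

def choose_evidence(openreview_papers: list[dict],
                    sample_contrasts: list[dict] | None = None) -> tuple[str, list[dict]]:
    """Prefer public OpenReview evidence; otherwise fall back to samples."""
    if len(openreview_papers) >= MIN_OPENREVIEW_PAPERS:
        return "openreview", openreview_papers
    if sample_contrasts:
        return "sample_contrast", sample_contrasts
    raise ValueError(
        f"Evidence gate failed: only {len(openreview_papers)} OpenReview "
        f"papers (need {MIN_OPENREVIEW_PAPERS}) and no sample contrasts."
    )
```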

Who should use this?

ML researchers reviewing conference submissions in sparse areas like privacy-preserving fine-tuning. PhD students or program chairs building custom skills for consistent feedback on method families. Devs in academia scripting GitHub Actions or apps around Codex for automated paper triage.

Verdict

Skip unless you're deep in the Codex ecosystem: 17 stars, even at 100% credibility, signal early immaturity and thin adoption, though solid docs and examples make it testable for niche builds. Worth a spin for area-specific review automation if generic LLMs fall short.


